Test Report: KVM_Linux_crio 21801

3dc60e2e5dc0007721440fd051e7cba5635b79e7:2025-10-27:42091

Test failures (14/336)

TestAddons/parallel/Ingress (157.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-864929 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-864929 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-864929 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9e5f3a97-dcd1-44e6-920b-2953ee6ba066] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9e5f3a97-dcd1-44e6-920b-2953ee6ba066] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003823877s
I1027 18:59:23.340665   62705 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-864929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.728178163s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-864929 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.216
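
Note on the failure above: the ssh curl check is the step that failed, and exit status 28 is most likely curl's "operation timed out" code surfaced through minikube ssh, meaning the request to the in-cluster ingress controller never completed within the allotted time. A rough manual re-check, using the same profile and commands this test runs (curl switched from -s to -v here only to get verbose output for triage), would be:

    kubectl --context addons-864929 -n ingress-nginx get pods --selector=app.kubernetes.io/component=controller
    out/minikube-linux-amd64 -p addons-864929 ssh "curl -v http://127.0.0.1/ -H 'Host: nginx.example.com'"
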
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-864929 -n addons-864929
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 logs -n 25: (1.435346076s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-343850 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-343850                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-021762                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-021762 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-343850                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-001257 --alsologtostderr --binary-mirror http://127.0.0.1:33585 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-001257 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-001257                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-001257 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-864929 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:58 UTC │
	│ addons  │ addons-864929 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:58 UTC │ 27 Oct 25 18:58 UTC │
	│ addons  │ addons-864929 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:58 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ enable headlamp -p addons-864929 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                         │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ip      │ addons-864929 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ssh     │ addons-864929 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-864929 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ip      │ addons-864929 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:24.622422   63277 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:24.622686   63277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:24.622698   63277 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:24.622702   63277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:24.622910   63277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 18:56:24.623413   63277 out.go:368] Setting JSON to false
	I1027 18:56:24.624309   63277 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5935,"bootTime":1761585450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:24.624396   63277 start.go:141] virtualization: kvm guest
	I1027 18:56:24.626201   63277 out.go:179] * [addons-864929] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:24.627811   63277 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:24.627823   63277 notify.go:220] Checking for updates...
	I1027 18:56:24.630357   63277 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:24.631602   63277 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:56:24.632948   63277 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:24.634382   63277 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 18:56:24.635581   63277 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:24.637140   63277 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:24.668548   63277 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 18:56:24.669928   63277 start.go:305] selected driver: kvm2
	I1027 18:56:24.669964   63277 start.go:925] validating driver "kvm2" against <nil>
	I1027 18:56:24.669977   63277 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:24.670794   63277 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:24.671024   63277 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:24.671068   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:56:24.671115   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:24.671129   63277 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:24.671178   63277 start.go:349] cluster config:
	{Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:24.671272   63277 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:56:24.672823   63277 out.go:179] * Starting "addons-864929" primary control-plane node in "addons-864929" cluster
	I1027 18:56:24.674049   63277 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:24.674093   63277 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:24.674104   63277 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:24.674220   63277 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 18:56:24.674236   63277 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:24.674548   63277 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json ...
	I1027 18:56:24.674571   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json: {Name:mk9ba1259c08877b5975916a854db91dcc4ee818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:24.674732   63277 start.go:360] acquireMachinesLock for addons-864929: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 18:56:24.674798   63277 start.go:364] duration metric: took 48.986µs to acquireMachinesLock for "addons-864929"
	I1027 18:56:24.674823   63277 start.go:93] Provisioning new machine with config: &{Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:56:24.674873   63277 start.go:125] createHost starting for "" (driver="kvm2")
	I1027 18:56:24.676393   63277 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1027 18:56:24.676558   63277 start.go:159] libmachine.API.Create for "addons-864929" (driver="kvm2")
	I1027 18:56:24.676590   63277 client.go:168] LocalClient.Create starting
	I1027 18:56:24.676678   63277 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem
	I1027 18:56:24.780202   63277 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem
	I1027 18:56:24.900124   63277 main.go:141] libmachine: creating domain...
	I1027 18:56:24.900145   63277 main.go:141] libmachine: creating network...
	I1027 18:56:24.901617   63277 main.go:141] libmachine: found existing default network
	I1027 18:56:24.901796   63277 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.902284   63277 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d609d0}
	I1027 18:56:24.902387   63277 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-864929</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.908158   63277 main.go:141] libmachine: creating private network mk-addons-864929 192.168.39.0/24...
	I1027 18:56:24.980252   63277 main.go:141] libmachine: private network mk-addons-864929 192.168.39.0/24 created
	I1027 18:56:24.980545   63277 main.go:141] libmachine: <network>
	  <name>mk-addons-864929</name>
	  <uuid>aef0d375-daa4-4865-b6ed-55a30809a7b8</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:71:bd:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.980576   63277 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 ...
	I1027 18:56:24.980605   63277 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1027 18:56:24.980620   63277 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:24.980717   63277 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21801-58821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1027 18:56:25.217277   63277 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa...
	I1027 18:56:25.365950   63277 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk...
	I1027 18:56:25.365998   63277 main.go:141] libmachine: Writing magic tar header
	I1027 18:56:25.366060   63277 main.go:141] libmachine: Writing SSH key tar header
	I1027 18:56:25.366173   63277 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 ...
	I1027 18:56:25.366260   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929
	I1027 18:56:25.366305   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 (perms=drwx------)
	I1027 18:56:25.366334   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines
	I1027 18:56:25.366351   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines (perms=drwxr-xr-x)
	I1027 18:56:25.366370   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:25.366382   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube (perms=drwxr-xr-x)
	I1027 18:56:25.366392   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821
	I1027 18:56:25.366400   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821 (perms=drwxrwxr-x)
	I1027 18:56:25.366413   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1027 18:56:25.366429   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1027 18:56:25.366447   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1027 18:56:25.366462   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1027 18:56:25.366477   63277 main.go:141] libmachine: checking permissions on dir: /home
	I1027 18:56:25.366489   63277 main.go:141] libmachine: skipping /home - not owner
	I1027 18:56:25.366496   63277 main.go:141] libmachine: defining domain...
	I1027 18:56:25.367845   63277 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-864929</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-864929'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1027 18:56:25.373162   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:b7:94:cf in network default
	I1027 18:56:25.374053   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:25.374075   63277 main.go:141] libmachine: starting domain...
	I1027 18:56:25.374080   63277 main.go:141] libmachine: ensuring networks are active...
	I1027 18:56:25.374872   63277 main.go:141] libmachine: Ensuring network default is active
	I1027 18:56:25.375277   63277 main.go:141] libmachine: Ensuring network mk-addons-864929 is active
	I1027 18:56:25.375873   63277 main.go:141] libmachine: getting domain XML...
	I1027 18:56:25.376860   63277 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-864929</name>
	  <uuid>780db33d-391d-49ad-b77a-2a509bc06274</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f3:30:05'/>
	      <source network='mk-addons-864929'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b7:94:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
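
For manual inspection outside the test harness, the network and domain defined above can be examined with standard libvirt tooling, using the names recorded in this log, for example:

    virsh net-dumpxml mk-addons-864929
    virsh dumpxml addons-864929
    virsh domifaddr addons-864929 --source arp

The last command mirrors the source=arp address lookup the driver falls back to in the retry loop below; these commands are illustrative only.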
	
	I1027 18:56:26.638954   63277 main.go:141] libmachine: waiting for domain to start...
	I1027 18:56:26.640594   63277 main.go:141] libmachine: domain is now running
	I1027 18:56:26.640612   63277 main.go:141] libmachine: waiting for IP...
	I1027 18:56:26.641493   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:26.642006   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:26.642018   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:26.642278   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:26.642335   63277 retry.go:31] will retry after 204.12408ms: waiting for domain to come up
	I1027 18:56:26.847933   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:26.848726   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:26.848744   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:26.849096   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:26.849145   63277 retry.go:31] will retry after 259.734271ms: waiting for domain to come up
	I1027 18:56:27.110506   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.111193   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.111211   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.111565   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.111600   63277 retry.go:31] will retry after 353.747338ms: waiting for domain to come up
	I1027 18:56:27.467217   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.467990   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.468008   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.468404   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.468443   63277 retry.go:31] will retry after 408.188052ms: waiting for domain to come up
	I1027 18:56:27.877925   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.878585   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.878600   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.878986   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.879025   63277 retry.go:31] will retry after 584.807504ms: waiting for domain to come up
	I1027 18:56:28.465800   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:28.466457   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:28.466477   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:28.466925   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:28.466985   63277 retry.go:31] will retry after 655.104002ms: waiting for domain to come up
	I1027 18:56:29.123804   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:29.124507   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:29.124524   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:29.124825   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:29.124862   63277 retry.go:31] will retry after 1.151715647s: waiting for domain to come up
	I1027 18:56:30.278089   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:30.278736   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:30.278753   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:30.279106   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:30.279148   63277 retry.go:31] will retry after 899.383524ms: waiting for domain to come up
	I1027 18:56:31.180495   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:31.181365   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:31.181386   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:31.181743   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:31.181784   63277 retry.go:31] will retry after 1.154847749s: waiting for domain to come up
	I1027 18:56:32.337959   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:32.338631   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:32.338648   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:32.339016   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:32.339058   63277 retry.go:31] will retry after 1.618753171s: waiting for domain to come up
	I1027 18:56:33.960150   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:33.960873   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:33.960906   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:33.961382   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:33.961433   63277 retry.go:31] will retry after 2.574218898s: waiting for domain to come up
	I1027 18:56:36.537741   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:36.538394   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:36.538410   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:36.538756   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:36.538790   63277 retry.go:31] will retry after 3.021550252s: waiting for domain to come up
	I1027 18:56:39.563948   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:39.564552   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:39.564573   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:39.564876   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:39.564921   63277 retry.go:31] will retry after 3.629212065s: waiting for domain to come up
	I1027 18:56:43.197968   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.198898   63277 main.go:141] libmachine: domain addons-864929 has current primary IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.198915   63277 main.go:141] libmachine: found domain IP: 192.168.39.216
	I1027 18:56:43.198925   63277 main.go:141] libmachine: reserving static IP address...
	I1027 18:56:43.199329   63277 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-864929", mac: "52:54:00:f3:30:05", ip: "192.168.39.216"} in network mk-addons-864929
	I1027 18:56:43.451430   63277 main.go:141] libmachine: reserved static IP address 192.168.39.216 for domain addons-864929
	I1027 18:56:43.451477   63277 main.go:141] libmachine: waiting for SSH...
	I1027 18:56:43.451483   63277 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 18:56:43.455019   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.455546   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.455575   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.455753   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.456085   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.456098   63277 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 18:56:43.560285   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:56:43.560764   63277 main.go:141] libmachine: domain creation complete
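
At this point the guest is reachable over SSH with the per-profile key created earlier. A manual session, using the key path, user, and address recorded in this log, would look like:

    ssh -i /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa docker@192.168.39.216

or simply: out/minikube-linux-amd64 -p addons-864929 ssh
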
	I1027 18:56:43.562456   63277 machine.go:93] provisionDockerMachine start ...
	I1027 18:56:43.564923   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.565392   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.565416   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.565609   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.565938   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.565959   63277 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:56:43.669544   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 18:56:43.669580   63277 buildroot.go:166] provisioning hostname "addons-864929"
	I1027 18:56:43.672967   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.673411   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.673440   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.673604   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.673806   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.673817   63277 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864929 && echo "addons-864929" | sudo tee /etc/hostname
	I1027 18:56:43.795625   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864929
	
	I1027 18:56:43.798861   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.799296   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.799317   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.799492   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.799700   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.799715   63277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:56:43.910892   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:56:43.910939   63277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 18:56:43.910981   63277 buildroot.go:174] setting up certificates
	I1027 18:56:43.910994   63277 provision.go:84] configureAuth start
	I1027 18:56:43.913915   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.914336   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.914362   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.916504   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.916890   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.916954   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.917128   63277 provision.go:143] copyHostCerts
	I1027 18:56:43.917210   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 18:56:43.917348   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 18:56:43.917476   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 18:56:43.917558   63277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.addons-864929 san=[127.0.0.1 192.168.39.216 addons-864929 localhost minikube]
	I1027 18:56:44.249940   63277 provision.go:177] copyRemoteCerts
	I1027 18:56:44.250009   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:56:44.252895   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.253468   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.253497   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.253713   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.336145   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:56:44.366470   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:56:44.396879   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 18:56:44.427777   63277 provision.go:87] duration metric: took 516.764566ms to configureAuth
	I1027 18:56:44.427808   63277 buildroot.go:189] setting minikube options for container-runtime
	I1027 18:56:44.428052   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:56:44.430830   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.431257   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.431285   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.431516   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:44.431741   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:44.431759   63277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:56:44.684141   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:56:44.684169   63277 machine.go:96] duration metric: took 1.121694006s to provisionDockerMachine
	I1027 18:56:44.684180   63277 client.go:171] duration metric: took 20.007583494s to LocalClient.Create
	I1027 18:56:44.684313   63277 start.go:167] duration metric: took 20.00763875s to libmachine.API.Create "addons-864929"
	I1027 18:56:44.684443   63277 start.go:293] postStartSetup for "addons-864929" (driver="kvm2")
	I1027 18:56:44.684457   63277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:56:44.684684   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:56:44.687967   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.688366   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.688388   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.688532   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.773838   63277 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:56:44.779587   63277 info.go:137] Remote host: Buildroot 2025.02
	I1027 18:56:44.779618   63277 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 18:56:44.779720   63277 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 18:56:44.779744   63277 start.go:296] duration metric: took 95.294071ms for postStartSetup
	I1027 18:56:44.783531   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.783956   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.783992   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.784296   63277 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json ...
	I1027 18:56:44.784513   63277 start.go:128] duration metric: took 20.109628328s to createHost
	I1027 18:56:44.787202   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.787607   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.787630   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.787827   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:44.788095   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:44.788112   63277 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 18:56:44.892155   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761591404.854722623
	
	I1027 18:56:44.892187   63277 fix.go:216] guest clock: 1761591404.854722623
	I1027 18:56:44.892195   63277 fix.go:229] Guest: 2025-10-27 18:56:44.854722623 +0000 UTC Remote: 2025-10-27 18:56:44.784525373 +0000 UTC m=+20.209597039 (delta=70.19725ms)
	I1027 18:56:44.892213   63277 fix.go:200] guest clock delta is within tolerance: 70.19725ms
	I1027 18:56:44.892218   63277 start.go:83] releasing machines lock for "addons-864929", held for 20.217407876s
	I1027 18:56:44.895316   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.895759   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.895786   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.896530   63277 ssh_runner.go:195] Run: cat /version.json
	I1027 18:56:44.896625   63277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:56:44.899743   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.899867   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900211   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.900246   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900407   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.900437   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900431   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.900649   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.976028   63277 ssh_runner.go:195] Run: systemctl --version
	I1027 18:56:45.001174   63277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:56:45.161871   63277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:56:45.169373   63277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:56:45.169442   63277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:56:45.190185   63277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 18:56:45.190215   63277 start.go:495] detecting cgroup driver to use...
	I1027 18:56:45.190307   63277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:56:45.209752   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:56:45.232403   63277 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:56:45.232474   63277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:56:45.253470   63277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:56:45.271232   63277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:56:45.419310   63277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:56:45.638393   63277 docker.go:234] disabling docker service ...
	I1027 18:56:45.638482   63277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:56:45.655615   63277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:56:45.671872   63277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:56:45.833201   63277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:56:45.978905   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 18:56:45.995588   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:56:46.019765   63277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:56:46.019841   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.033497   63277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 18:56:46.033570   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.047513   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.060521   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.074441   63277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:56:46.088325   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.101213   63277 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.122423   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.135007   63277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:56:46.146221   63277 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 18:56:46.146284   63277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 18:56:46.169839   63277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:56:46.183407   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:46.324987   63277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 18:56:46.440290   63277 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:56:46.440374   63277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:56:46.446158   63277 start.go:563] Will wait 60s for crictl version
	I1027 18:56:46.446240   63277 ssh_runner.go:195] Run: which crictl
	I1027 18:56:46.450614   63277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 18:56:46.496013   63277 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 18:56:46.496113   63277 ssh_runner.go:195] Run: crio --version
	I1027 18:56:46.526418   63277 ssh_runner.go:195] Run: crio --version
	I1027 18:56:46.560428   63277 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 18:56:46.564607   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:46.565084   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:46.565113   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:46.565366   63277 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1027 18:56:46.570158   63277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:46.586255   63277 kubeadm.go:883] updating cluster {Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:56:46.586379   63277 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:46.586431   63277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:46.623555   63277 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 18:56:46.623625   63277 ssh_runner.go:195] Run: which lz4
	I1027 18:56:46.628237   63277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 18:56:46.633510   63277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 18:56:46.633544   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 18:56:48.156071   63277 crio.go:462] duration metric: took 1.527888186s to copy over tarball
	I1027 18:56:48.156150   63277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 18:56:49.783875   63277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.627696709s)
	I1027 18:56:49.783899   63277 crio.go:469] duration metric: took 1.627800498s to extract the tarball
	I1027 18:56:49.783908   63277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 18:56:49.829229   63277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:49.875294   63277 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:49.875323   63277 cache_images.go:85] Images are preloaded, skipping loading
	I1027 18:56:49.875334   63277 kubeadm.go:934] updating node { 192.168.39.216 8443 v1.34.1 crio true true} ...
	I1027 18:56:49.875442   63277 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-864929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 18:56:49.875581   63277 ssh_runner.go:195] Run: crio config
	I1027 18:56:49.932154   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:56:49.932179   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:49.932200   63277 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:56:49.932223   63277 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864929 NodeName:addons-864929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:56:49.932364   63277 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-864929"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.216"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 18:56:49.932437   63277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:56:49.945627   63277 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:56:49.945703   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:56:49.959045   63277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 18:56:49.983292   63277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:56:50.007675   63277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1027 18:56:50.032663   63277 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I1027 18:56:50.037426   63277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:50.053663   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:50.200983   63277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:56:50.242073   63277 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929 for IP: 192.168.39.216
	I1027 18:56:50.242097   63277 certs.go:195] generating shared ca certs ...
	I1027 18:56:50.242119   63277 certs.go:227] acquiring lock for ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.242309   63277 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 18:56:50.542245   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt ...
	I1027 18:56:50.542277   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt: {Name:mkb0b7411ce05946b9a6d920de38fad3ab6c6a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.542460   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key ...
	I1027 18:56:50.542471   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key: {Name:mk283eb2e002819e788fa8f18c386299d47777a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.542548   63277 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 18:56:50.638160   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt ...
	I1027 18:56:50.638191   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt: {Name:mk8a0909df9310cadf02928e1cc040e0903818db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.638365   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key ...
	I1027 18:56:50.638377   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key: {Name:mk4aa59bab040235f70f65aa2d7af7f89bd4659d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.638460   63277 certs.go:257] generating profile certs ...
	I1027 18:56:50.638519   63277 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key
	I1027 18:56:50.638549   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt with IP's: []
	I1027 18:56:50.779809   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt ...
	I1027 18:56:50.779847   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: {Name:mka2b9867ee328b7112768834356aaca6b5fc109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.780044   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key ...
	I1027 18:56:50.780059   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key: {Name:mkcbab4e1e83774a62e689c6d7789d3eb343f864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.780139   63277 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d
	I1027 18:56:50.780161   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.216]
	I1027 18:56:51.313872   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d ...
	I1027 18:56:51.313911   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d: {Name:mk4942a380088e956850812de28b65602aee81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.314117   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d ...
	I1027 18:56:51.314132   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d: {Name:mk2bf51af3cc29c0e7479b746ffe650e8b348547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.314226   63277 certs.go:382] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt
	I1027 18:56:51.314298   63277 certs.go:386] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key
	I1027 18:56:51.314355   63277 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key
	I1027 18:56:51.314373   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt with IP's: []
	I1027 18:56:51.489257   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt ...
	I1027 18:56:51.489292   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt: {Name:mk6be1958bd7a086d707056124a43ee705cf8efa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.489483   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key ...
	I1027 18:56:51.489496   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key: {Name:mkedbe974c66eb2183a2d8824fcd1a064e7f0629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.489667   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 18:56:51.489699   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:56:51.489734   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:56:51.489756   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 18:56:51.490337   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:56:51.527261   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:56:51.566595   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:56:51.597942   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 18:56:51.630829   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:56:51.664688   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:56:51.696594   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:56:51.734852   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:56:51.770778   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:56:51.805559   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:56:51.833421   63277 ssh_runner.go:195] Run: openssl version
	I1027 18:56:51.841743   63277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:56:51.857852   63277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.864612   63277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.864680   63277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.873224   63277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 18:56:51.893213   63277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:56:51.899405   63277 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:56:51.899464   63277 kubeadm.go:400] StartCluster: {Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:51.899550   63277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:56:51.899604   63277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:56:51.945935   63277 cri.go:89] found id: ""
	I1027 18:56:51.946016   63277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:56:51.959289   63277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:56:51.972387   63277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:56:51.985164   63277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:56:51.985182   63277 kubeadm.go:157] found existing configuration files:
	
	I1027 18:56:51.985239   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:56:51.997222   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:56:51.997284   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:56:52.010322   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:56:52.022203   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:56:52.022274   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:56:52.034805   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:56:52.046201   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:56:52.046272   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:56:52.059475   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:56:52.070876   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:56:52.070957   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 18:56:52.083713   63277 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 18:56:52.243337   63277 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 18:57:05.929419   63277 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:57:05.929514   63277 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:57:05.929629   63277 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:57:05.929750   63277 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:57:05.929840   63277 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 18:57:05.929894   63277 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:57:05.931664   63277 out.go:252]   - Generating certificates and keys ...
	I1027 18:57:05.931750   63277 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:57:05.931835   63277 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:57:05.931942   63277 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:57:05.932018   63277 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:57:05.932119   63277 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:57:05.932200   63277 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:57:05.932269   63277 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:57:05.932432   63277 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-864929 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I1027 18:57:05.932514   63277 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:57:05.932685   63277 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-864929 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I1027 18:57:05.932782   63277 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:57:05.932893   63277 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:57:05.932942   63277 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:57:05.932998   63277 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:57:05.933056   63277 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:05.933116   63277 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:05.933163   63277 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:05.933242   63277 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:05.933312   63277 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:05.933416   63277 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:05.933518   63277 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:05.934838   63277 out.go:252]   - Booting up control plane ...
	I1027 18:57:05.934938   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:05.935072   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:05.935153   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:05.935254   63277 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:05.935331   63277 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:05.935413   63277 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:05.935480   63277 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:05.935513   63277 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:05.935618   63277 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:05.935705   63277 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:05.935754   63277 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502502542s
	I1027 18:57:05.935827   63277 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:05.935892   63277 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.216:8443/livez
	I1027 18:57:05.935992   63277 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:05.936113   63277 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:05.936221   63277 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.069256284s
	I1027 18:57:05.936298   63277 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.735103952s
	I1027 18:57:05.936363   63277 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003425011s
	I1027 18:57:05.936455   63277 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:05.936590   63277 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:05.936648   63277 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:05.936807   63277 kubeadm.go:318] [mark-control-plane] Marking the node addons-864929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:05.936859   63277 kubeadm.go:318] [bootstrap-token] Using token: s2v11a.htd6rq4ivxisd01i
	I1027 18:57:05.938605   63277 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:05.938701   63277 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:05.938793   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:05.938934   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:05.939090   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:05.939208   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:05.939282   63277 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:05.939396   63277 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:05.939437   63277 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:05.939494   63277 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:05.939501   63277 kubeadm.go:318] 
	I1027 18:57:05.939571   63277 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:05.939578   63277 kubeadm.go:318] 
	I1027 18:57:05.939688   63277 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:05.939702   63277 kubeadm.go:318] 
	I1027 18:57:05.939738   63277 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:05.939802   63277 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:05.939870   63277 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:05.939883   63277 kubeadm.go:318] 
	I1027 18:57:05.939933   63277 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:05.939939   63277 kubeadm.go:318] 
	I1027 18:57:05.939985   63277 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:05.939991   63277 kubeadm.go:318] 
	I1027 18:57:05.940048   63277 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:05.940134   63277 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:05.940215   63277 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:05.940222   63277 kubeadm.go:318] 
	I1027 18:57:05.940329   63277 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:05.940400   63277 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:05.940406   63277 kubeadm.go:318] 
	I1027 18:57:05.940470   63277 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s2v11a.htd6rq4ivxisd01i \
	I1027 18:57:05.940553   63277 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a \
	I1027 18:57:05.940572   63277 kubeadm.go:318] 	--control-plane 
	I1027 18:57:05.940578   63277 kubeadm.go:318] 
	I1027 18:57:05.940643   63277 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:05.940649   63277 kubeadm.go:318] 
	I1027 18:57:05.940731   63277 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s2v11a.htd6rq4ivxisd01i \
	I1027 18:57:05.940833   63277 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a 
	I1027 18:57:05.940844   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:57:05.940851   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:57:05.943012   63277 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 18:57:05.944248   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 18:57:05.965148   63277 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1027 18:57:05.989594   63277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:05.989700   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:05.989727   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-864929 minikube.k8s.io/updated_at=2025_10_27T18_57_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-864929 minikube.k8s.io/primary=true
	I1027 18:57:06.017183   63277 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:06.172167   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:06.672287   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.173180   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.673264   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.172481   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.672997   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.173247   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.672863   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.172654   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.270470   63277 kubeadm.go:1113] duration metric: took 4.280852325s to wait for elevateKubeSystemPrivileges
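The burst of identical "kubectl get sa default" runs above is a plain poll: keep asking for the default service account until it exists, then record how long the wait took. A minimal sketch of that polling pattern follows; the half-second interval, the timeout, and the helper name are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
	// deadline passes. Interval and timeout are illustrative values.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil // the default service account now exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		start := time.Now()
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("took %s to wait for the default service account\n", time.Since(start))
	}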
	I1027 18:57:10.270507   63277 kubeadm.go:402] duration metric: took 18.371048599s to StartCluster
	I1027 18:57:10.270544   63277 settings.go:142] acquiring lock: {Name:mk19a39086427cb47b9bb78fd0b5176c91a751d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:10.270695   63277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:57:10.271083   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/kubeconfig: {Name:mk90c4d883178b7191d62a8cd99434bc24dd555f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:10.271332   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:10.271363   63277 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:10.271434   63277 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 18:57:10.271577   63277 addons.go:69] Setting yakd=true in profile "addons-864929"
	I1027 18:57:10.271588   63277 addons.go:69] Setting inspektor-gadget=true in profile "addons-864929"
	I1027 18:57:10.271607   63277 addons.go:238] Setting addon yakd=true in "addons-864929"
	I1027 18:57:10.271624   63277 addons.go:238] Setting addon inspektor-gadget=true in "addons-864929"
	I1027 18:57:10.271619   63277 addons.go:69] Setting default-storageclass=true in profile "addons-864929"
	I1027 18:57:10.271636   63277 addons.go:69] Setting registry-creds=true in profile "addons-864929"
	I1027 18:57:10.271644   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271653   63277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864929"
	I1027 18:57:10.271661   63277 addons.go:69] Setting metrics-server=true in profile "addons-864929"
	I1027 18:57:10.271672   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271678   63277 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864929"
	I1027 18:57:10.271688   63277 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-864929"
	I1027 18:57:10.271662   63277 addons.go:69] Setting ingress=true in profile "addons-864929"
	I1027 18:57:10.271718   63277 addons.go:238] Setting addon ingress=true in "addons-864929"
	I1027 18:57:10.271723   63277 addons.go:69] Setting registry=true in profile "addons-864929"
	I1027 18:57:10.271735   63277 addons.go:238] Setting addon registry=true in "addons-864929"
	I1027 18:57:10.271751   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271781   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271779   63277 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864929"
	I1027 18:57:10.271801   63277 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864929"
	I1027 18:57:10.272335   63277 addons.go:69] Setting ingress-dns=true in profile "addons-864929"
	I1027 18:57:10.272359   63277 addons.go:238] Setting addon ingress-dns=true in "addons-864929"
	I1027 18:57:10.272388   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272662   63277 addons.go:69] Setting storage-provisioner=true in profile "addons-864929"
	I1027 18:57:10.272684   63277 addons.go:238] Setting addon storage-provisioner=true in "addons-864929"
	I1027 18:57:10.272703   63277 addons.go:69] Setting volcano=true in profile "addons-864929"
	I1027 18:57:10.272719   63277 addons.go:69] Setting volumesnapshots=true in profile "addons-864929"
	I1027 18:57:10.272728   63277 addons.go:238] Setting addon volcano=true in "addons-864929"
	I1027 18:57:10.272731   63277 addons.go:238] Setting addon volumesnapshots=true in "addons-864929"
	I1027 18:57:10.272747   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272709   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271622   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:10.272915   63277 addons.go:69] Setting cloud-spanner=true in profile "addons-864929"
	I1027 18:57:10.272937   63277 addons.go:238] Setting addon cloud-spanner=true in "addons-864929"
	I1027 18:57:10.271674   63277 addons.go:238] Setting addon metrics-server=true in "addons-864929"
	I1027 18:57:10.272967   63277 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-864929"
	I1027 18:57:10.272979   63277 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-864929"
	I1027 18:57:10.272994   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272962   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271656   63277 addons.go:238] Setting addon registry-creds=true in "addons-864929"
	I1027 18:57:10.273353   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271719   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273804   63277 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864929"
	I1027 18:57:10.272992   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273875   63277 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-864929"
	I1027 18:57:10.273876   63277 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:10.273923   63277 addons.go:69] Setting gcp-auth=true in profile "addons-864929"
	I1027 18:57:10.273943   63277 mustload.go:65] Loading cluster: addons-864929
	I1027 18:57:10.272753   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273912   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.274165   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:10.275351   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:10.280649   63277 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-864929"
	I1027 18:57:10.280696   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.280648   63277 addons.go:238] Setting addon default-storageclass=true in "addons-864929"
	I1027 18:57:10.280792   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.281457   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:10.281464   63277 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:10.281464   63277 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:10.281472   63277 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:10.281475   63277 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:10.282784   63277 host.go:66] Checking if "addons-864929" exists ...
	W1027 18:57:10.283458   63277 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:10.284391   63277 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:10.284413   63277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:10.284781   63277 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:10.284783   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:10.284784   63277 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:10.284827   63277 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:10.285656   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:10.285668   63277 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:10.285667   63277 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:10.286196   63277 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:10.285679   63277 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:10.285694   63277 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:10.285702   63277 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:10.285712   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:10.285727   63277 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:10.285763   63277 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:10.286475   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:10.287027   63277 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:10.287211   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:10.286535   63277 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:10.287783   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:10.287353   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:10.287361   63277 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:10.288659   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:10.287367   63277 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:10.288803   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:10.288238   63277 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:10.288289   63277 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:10.289099   63277 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:10.289112   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:10.288511   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.289111   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:10.289229   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:10.289243   63277 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:10.289702   63277 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:10.289754   63277 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:10.289812   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:10.290341   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.290649   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.291093   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:10.291547   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.292319   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:10.292764   63277 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:10.292901   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:10.293496   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.294077   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:10.294199   63277 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:10.294235   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:10.294665   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.294862   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.294885   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.295658   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.296760   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.296778   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.296804   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.297672   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.298250   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.298288   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.298666   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:10.298926   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.299336   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.299404   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.300642   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301088   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.301156   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301344   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.301372   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301519   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:10.301767   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301854   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.302100   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.302209   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302286   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302408   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.302456   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302745   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302894   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.303125   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303161   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303130   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303303   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303406   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303460   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303507   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303830   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304098   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304190   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.304220   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304324   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304342   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304762   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.304791   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304845   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305002   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:10.305135   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305171   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.305224   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.305423   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305445   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.305836   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.305863   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.306116   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.307841   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:10.309061   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:10.310163   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:10.310201   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:10.312870   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.313280   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.313301   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.313464   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	W1027 18:57:10.535116   63277 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54864->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.535158   63277 retry.go:31] will retry after 369.415138ms: ssh: handshake failed: read tcp 192.168.39.1:54864->192.168.39.216:22: read: connection reset by peer
	W1027 18:57:10.541619   63277 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54870->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.541652   63277 retry.go:31] will retry after 219.162578ms: ssh: handshake failed: read tcp 192.168.39.1:54870->192.168.39.216:22: read: connection reset by peer
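The two SSH handshake failures above are not fatal: the dial is simply re-attempted after a short, randomized delay. A sketch of that retry-with-jitter pattern, with a hypothetical helper name and illustrative delays:

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithRetry re-attempts a TCP dial a few times, sleeping a randomized
	// sub-second delay between attempts, roughly matching the retry.go lines above.
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			delay := time.Duration(200+rand.Intn(400)) * time.Millisecond // illustrative jitter
			fmt.Printf("dial failure (will retry after %s): %v\n", delay, err)
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		// 192.168.39.216:22 is the guest's SSH endpoint in the log; any reachable address works here.
		if conn, err := dialWithRetry("192.168.39.216:22", 3); err == nil {
			conn.Close()
		}
	}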
	I1027 18:57:10.985109   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:10.985150   63277 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:11.132247   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:11.138615   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:11.138646   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:11.143955   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:11.143981   63277 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:11.155121   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:11.155156   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:11.157384   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:11.170100   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:11.321437   63277 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:11.321472   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:11.329006   63277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.057632855s)
	I1027 18:57:11.329090   63277 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.053707515s)
	I1027 18:57:11.329177   63277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:11.329278   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 18:57:11.351194   63277 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:11.351228   63277 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:11.372537   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:11.394769   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:11.394810   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:11.396018   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:11.456333   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:11.584380   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:11.712662   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:11.712687   63277 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:11.735201   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:11.735231   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:11.839761   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:11.839788   63277 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:11.900683   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.042980   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.058451   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:12.058490   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:12.070398   63277 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.070429   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:12.354158   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.354199   63277 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:12.362109   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.365612   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:12.365648   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:12.438920   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:12.438943   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:12.700463   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:12.700490   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:12.700500   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.840634   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.856064   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:12.902734   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:12.902762   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:13.137669   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:13.137698   63277 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:13.351985   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:13.352016   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:13.596268   63277 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:13.596294   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:13.714853   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.582551362s)
	I1027 18:57:13.850557   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:13.850595   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:14.071067   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:14.389873   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:14.389897   63277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:14.901480   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:14.901504   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:15.349961   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:15.349990   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:15.716286   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:15.716315   63277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:16.040523   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:17.129847   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.972419936s)
	I1027 18:57:17.129872   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.959737081s)
	I1027 18:57:17.129940   63277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.80062985s)
	I1027 18:57:17.129973   63277 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1027 18:57:17.129951   63277 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.800751256s)
	I1027 18:57:17.130902   63277 node_ready.go:35] waiting up to 6m0s for node "addons-864929" to be "Ready" ...
	I1027 18:57:17.155377   63277 node_ready.go:49] node "addons-864929" is "Ready"
	I1027 18:57:17.155425   63277 node_ready.go:38] duration metric: took 24.493356ms for node "addons-864929" to be "Ready" ...
	I1027 18:57:17.155441   63277 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:57:17.155509   63277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:57:17.249988   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.877396986s)
	I1027 18:57:17.250062   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.854018331s)
	I1027 18:57:17.250127   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.793752896s)
	I1027 18:57:17.250185   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.665778222s)
	I1027 18:57:17.686081   63277 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-864929" context rescaled to 1 replicas
	I1027 18:57:17.769830   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:17.773614   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:17.774163   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:17.774193   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:17.774409   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:17.835030   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.934303033s)
	W1027 18:57:17.835104   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.835131   63277 retry.go:31] will retry after 292.877887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
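The addon installer treats a failed "kubectl apply" as retryable: the validation error above is logged, and the same manifests are re-applied shortly afterwards with --force (the 18:57:18.128649 line below). A minimal sketch of that apply-then-retry step; the single-retry policy, the fixed delay, and the bare "kubectl" invocation are simplifications for illustration.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyAddon runs `kubectl apply -f ...` and, if it fails, retries once with
	// --force after a short pause, mirroring the retry recorded in the log.
	func applyAddon(files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		if err := exec.Command("kubectl", args...).Run(); err != nil {
			fmt.Printf("apply failed, will retry with --force: %v\n", err)
			time.Sleep(300 * time.Millisecond)
			forced := append([]string{"apply", "--force"}, args[1:]...)
			return exec.Command("kubectl", forced...).Run()
		}
		return nil
	}

	func main() {
		err := applyAddon("/etc/kubernetes/addons/ig-crd.yaml", "/etc/kubernetes/addons/ig-deployment.yaml")
		fmt.Println("result:", err)
	}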
	I1027 18:57:18.055795   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:18.104947   63277 addons.go:238] Setting addon gcp-auth=true in "addons-864929"
	I1027 18:57:18.105010   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:18.106942   63277 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:18.109558   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:18.110007   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:18.110059   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:18.110215   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:18.128649   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:19.900432   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.538276397s)
	I1027 18:57:19.900485   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.199953129s)
	I1027 18:57:19.900517   63277 addons.go:479] Verifying addon registry=true in "addons-864929"
	I1027 18:57:19.900644   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.059975315s)
	I1027 18:57:19.900669   63277 addons.go:479] Verifying addon metrics-server=true in "addons-864929"
	I1027 18:57:19.900741   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.044632307s)
	I1027 18:57:19.902352   63277 out.go:179] * Verifying registry addon...
	I1027 18:57:19.902350   63277 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-864929 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:19.903449   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.860430632s)
	I1027 18:57:19.903482   63277 addons.go:479] Verifying addon ingress=true in "addons-864929"
	I1027 18:57:19.905028   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:19.905292   63277 out.go:179] * Verifying ingress addon...
	I1027 18:57:19.907320   63277 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:19.958238   63277 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:19.958265   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.958292   63277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:19.958311   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
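The kapi.go lines above poll for pods matching a label selector and keep waiting while any of them report Pending. A compact sketch of that wait, shelling out to kubectl with a jsonpath query; the namespace and selector are the ones shown in the log, while the interval, timeout, and helper name are illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLabeledPods polls the phases of pods matching a label selector until
	// every matching pod reports Running, the pattern behind the kapi.go lines above.
	func waitForLabeledPods(ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "pods", "-n", ns, "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				allRunning := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						allRunning = false
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q in %q not Running after %s", selector, ns, timeout)
	}

	func main() {
		fmt.Println(waitForLabeledPods("ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute))
	}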
	I1027 18:57:20.433182   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.434585   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.543543   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.472423735s)
	W1027 18:57:20.543599   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:20.543627   63277 retry.go:31] will retry after 255.689771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:20.800094   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:20.922952   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.923554   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.442922   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.442981   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.773578   63277 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.618035898s)
	I1027 18:57:21.773618   63277 api_server.go:72] duration metric: took 11.502220917s to wait for apiserver process to appear ...
	I1027 18:57:21.773628   63277 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:57:21.773654   63277 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1027 18:57:21.774535   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.733957112s)
	I1027 18:57:21.774578   63277 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-864929"
	I1027 18:57:21.776672   63277 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:21.779451   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:21.792875   63277 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I1027 18:57:21.806882   63277 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:21.806906   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.811185   63277 api_server.go:141] control plane version: v1.34.1
	I1027 18:57:21.811218   63277 api_server.go:131] duration metric: took 37.583056ms to wait for apiserver health ...
	I1027 18:57:21.811241   63277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:57:21.837867   63277 system_pods.go:59] 20 kube-system pods found
	I1027 18:57:21.837924   63277 system_pods.go:61] "amd-gpu-device-plugin-zg4tw" [26b73888-1e70-456d-ab70-4392ce52af26] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:21.837935   63277 system_pods.go:61] "coredns-66bc5c9577-5v77t" [13dc8b33-a53f-4df7-8cea-be41471727fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.837946   63277 system_pods.go:61] "coredns-66bc5c9577-f8dfl" [7ada2d5f-c124-4130-8e4d-f5f6f0d2b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.837954   63277 system_pods.go:61] "csi-hostpath-attacher-0" [923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:21.837960   63277 system_pods.go:61] "csi-hostpath-resizer-0" [2d2edb44-d6fd-41c7-aebc-45f7051be9b9] Pending
	I1027 18:57:21.837970   63277 system_pods.go:61] "csi-hostpathplugin-2kk6q" [4df09867-d21a-494d-b1c1-b33d1ae05292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:21.837976   63277 system_pods.go:61] "etcd-addons-864929" [0423c9dd-5674-4e91-be68-a3255c87fce6] Running
	I1027 18:57:21.837982   63277 system_pods.go:61] "kube-apiserver-addons-864929" [b43be527-80f0-4d18-8362-54d51f1f3a19] Running
	I1027 18:57:21.837987   63277 system_pods.go:61] "kube-controller-manager-addons-864929" [f65a9a0f-0799-4414-87de-291236ac723d] Running
	I1027 18:57:21.837995   63277 system_pods.go:61] "kube-ingress-dns-minikube" [66c0967e-2aba-46db-9b8d-50afb9e508c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:21.838001   63277 system_pods.go:61] "kube-proxy-5grdt" [73ab29d4-f3af-4942-87b0-5b146ec49fd2] Running
	I1027 18:57:21.838010   63277 system_pods.go:61] "kube-scheduler-addons-864929" [ac2cfd72-7a4b-46a5-b8fc-d1b7552feb30] Running
	I1027 18:57:21.838017   63277 system_pods.go:61] "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:21.838026   63277 system_pods.go:61] "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:21.838050   63277 system_pods.go:61] "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:21.838063   63277 system_pods.go:61] "registry-creds-764b6fb674-g7z85" [b7d5c5d1-64ba-4adf-b61a-42be8e53ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:21.838072   63277 system_pods.go:61] "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:21.838085   63277 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9nfvf" [e133be4d-c9ac-45ee-8523-3197eb5ae1dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.838099   63277 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t78cg" [07e1f13e-a7d4-496f-9f63-f96306459e61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.838111   63277 system_pods.go:61] "storage-provisioner" [1ec5b960-2f51-438a-9968-46e1bea6ddc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:21.838126   63277 system_pods.go:74] duration metric: took 26.872544ms to wait for pod list to return data ...
	I1027 18:57:21.838141   63277 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:57:21.867654   63277 default_sa.go:45] found service account: "default"
	I1027 18:57:21.867680   63277 default_sa.go:55] duration metric: took 29.532579ms for default service account to be created ...
	I1027 18:57:21.867689   63277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:57:21.883210   63277 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:21.883247   63277 system_pods.go:89] "amd-gpu-device-plugin-zg4tw" [26b73888-1e70-456d-ab70-4392ce52af26] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:21.883257   63277 system_pods.go:89] "coredns-66bc5c9577-5v77t" [13dc8b33-a53f-4df7-8cea-be41471727fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.883266   63277 system_pods.go:89] "coredns-66bc5c9577-f8dfl" [7ada2d5f-c124-4130-8e4d-f5f6f0d2b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.883272   63277 system_pods.go:89] "csi-hostpath-attacher-0" [923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:21.883278   63277 system_pods.go:89] "csi-hostpath-resizer-0" [2d2edb44-d6fd-41c7-aebc-45f7051be9b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:21.883294   63277 system_pods.go:89] "csi-hostpathplugin-2kk6q" [4df09867-d21a-494d-b1c1-b33d1ae05292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:21.883301   63277 system_pods.go:89] "etcd-addons-864929" [0423c9dd-5674-4e91-be68-a3255c87fce6] Running
	I1027 18:57:21.883308   63277 system_pods.go:89] "kube-apiserver-addons-864929" [b43be527-80f0-4d18-8362-54d51f1f3a19] Running
	I1027 18:57:21.883313   63277 system_pods.go:89] "kube-controller-manager-addons-864929" [f65a9a0f-0799-4414-87de-291236ac723d] Running
	I1027 18:57:21.883326   63277 system_pods.go:89] "kube-ingress-dns-minikube" [66c0967e-2aba-46db-9b8d-50afb9e508c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:21.883331   63277 system_pods.go:89] "kube-proxy-5grdt" [73ab29d4-f3af-4942-87b0-5b146ec49fd2] Running
	I1027 18:57:21.883339   63277 system_pods.go:89] "kube-scheduler-addons-864929" [ac2cfd72-7a4b-46a5-b8fc-d1b7552feb30] Running
	I1027 18:57:21.883347   63277 system_pods.go:89] "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:21.883358   63277 system_pods.go:89] "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:21.883365   63277 system_pods.go:89] "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:21.883372   63277 system_pods.go:89] "registry-creds-764b6fb674-g7z85" [b7d5c5d1-64ba-4adf-b61a-42be8e53ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:21.883378   63277 system_pods.go:89] "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:21.883383   63277 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9nfvf" [e133be4d-c9ac-45ee-8523-3197eb5ae1dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.883388   63277 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t78cg" [07e1f13e-a7d4-496f-9f63-f96306459e61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.883393   63277 system_pods.go:89] "storage-provisioner" [1ec5b960-2f51-438a-9968-46e1bea6ddc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:21.883404   63277 system_pods.go:126] duration metric: took 15.70908ms to wait for k8s-apps to be running ...
	I1027 18:57:21.883416   63277 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:57:21.883474   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:57:21.924022   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.927212   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.158899   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.03020142s)
	I1027 18:57:22.158954   63277 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.051987547s)
	W1027 18:57:22.158980   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.159006   63277 retry.go:31] will retry after 279.686083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
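
	[editor's note, not part of the captured log] The inspektor-gadget retries above fail differently: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because a document in that file is missing the mandatory apiVersion and kind fields, so every retry hits the same error until the rendered manifest is well formed. A quick illustrative check from the node (commands assumed, not taken from this log):

	# Illustrative sketch only - not from the captured log.
	# Every manifest document must carry apiVersion and kind; see whether the file does.
	sudo grep -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml
	# The error text offers --validate=false as a bypass, but that only skips the check;
	# it does not supply the missing fields in the manifest itself.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml
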
	I1027 18:57:22.160959   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:22.162547   63277 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:22.164115   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:22.164141   63277 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:22.261201   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:22.261230   63277 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:22.288886   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.352572   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:22.352609   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:22.439692   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:22.441909   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:22.481468   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.481666   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.788128   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.914985   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.915276   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.285377   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.418349   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.418666   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.583239   63277 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.699734966s)
	I1027 18:57:23.583281   63277 system_svc.go:56] duration metric: took 1.699860035s WaitForService to wait for kubelet
	I1027 18:57:23.583292   63277 kubeadm.go:586] duration metric: took 13.311893893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:57:23.583319   63277 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:57:23.583423   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783267207s)
	I1027 18:57:23.593344   63277 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 18:57:23.593372   63277 node_conditions.go:123] node cpu capacity is 2
	I1027 18:57:23.593391   63277 node_conditions.go:105] duration metric: took 10.067491ms to run NodePressure ...
	I1027 18:57:23.593404   63277 start.go:241] waiting for startup goroutines ...
	I1027 18:57:23.787519   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.924794   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.924888   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.290306   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.848359661s)
	I1027 18:57:24.291626   63277 addons.go:479] Verifying addon gcp-auth=true in "addons-864929"
	I1027 18:57:24.294508   63277 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:24.296641   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:24.328761   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.328910   63277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:24.328951   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.413802   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:24.416333   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.786549   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.805212   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.915701   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.921802   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.061422   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.621678381s)
	W1027 18:57:25.061478   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:25.061503   63277 retry.go:31] will retry after 804.946825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:25.289162   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.301160   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.421590   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.423412   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.785953   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.802888   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.867047   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:25.919138   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.919440   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.286933   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.301794   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.417105   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.417267   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.785587   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.804169   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.908637   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.912996   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.288028   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.300864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.412910   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.416533   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.456859   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.589752651s)
	W1027 18:57:27.456908   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.456932   63277 retry.go:31] will retry after 685.459936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.784840   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.801850   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.910590   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.912874   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.143005   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:28.285631   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.300220   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.419303   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.422363   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.784623   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.802401   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.911601   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.915428   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.283493   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.300718   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.364540   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.221494949s)
	W1027 18:57:29.364577   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:29.364611   63277 retry.go:31] will retry after 1.757799431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:29.416322   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.418953   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.787868   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.799055   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.910571   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.914273   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.286180   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.303999   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.413104   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.416370   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.787744   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.803419   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.916360   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.919438   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.122558   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:31.285676   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.301308   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.411868   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.412485   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.787290   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.802700   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.913644   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.915831   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.286432   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.304334   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.374445   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251833687s)
	W1027 18:57:32.374511   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:32.374541   63277 retry.go:31] will retry after 2.78595925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:32.416811   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.416913   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.785363   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.804140   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.915420   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.916567   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.292316   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.303111   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.464111   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.464335   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.784707   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.803523   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.909242   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.911455   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.303435   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.303506   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.413609   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.417021   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.784372   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.802229   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.911142   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.916104   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.161393   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:35.283283   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.301025   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:35.410195   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.416262   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.146770   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.157278   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.158333   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.158723   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.286639   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.300897   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.418783   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.423389   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.618778   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.457337067s)
	W1027 18:57:36.618824   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:36.618849   63277 retry.go:31] will retry after 2.808126494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:36.785856   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.800053   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.911223   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.913610   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.283520   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.300915   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.411384   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.411564   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.783128   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.801353   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.908775   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.911143   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.284488   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.302812   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.423418   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.423531   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:38.784017   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.800264   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.911392   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.912809   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.284702   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.302513   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.414232   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:39.414350   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.427461   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:39.837291   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.837565   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.910765   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.914552   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.287903   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.301760   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.416079   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.416206   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.448955   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.021449854s)
	W1027 18:57:40.449007   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:40.449046   63277 retry.go:31] will retry after 2.389005779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:40.785654   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.802757   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.913550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.914781   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.286164   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.300417   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.408904   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.411315   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.783667   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.801000   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.911341   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.911526   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.283379   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.300298   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.413464   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.413759   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.784936   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.801747   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.838978   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:42.914433   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.915753   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.284491   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.306054   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.410133   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.414779   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.787454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.802514   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.914613   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.915563   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.044025   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.205001809s)
	W1027 18:57:44.044086   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:44.044113   63277 retry.go:31] will retry after 6.569226607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:44.286635   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.301882   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.420149   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.420239   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.786772   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.801152   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.907893   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.912659   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.282844   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.299210   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.408847   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.415564   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.785932   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.799703   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.910796   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.912722   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.284380   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.300262   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.411586   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.413618   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.785774   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.802487   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.909401   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.911157   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.285427   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.301018   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.411570   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.415374   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.784426   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.800958   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.909404   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.911321   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.285898   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.301526   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.409153   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.420016   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.784072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.799905   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.910147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.911420   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.283552   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.301303   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.413410   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.413468   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.785136   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.803428   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.912135   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.918025   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.284843   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.300698   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.417847   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.418870   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.614173   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:50.785558   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.803089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.912911   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.914476   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.285211   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.299828   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.410597   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.417162   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.760476   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146250047s)
	W1027 18:57:51.760537   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:51.760566   63277 retry.go:31] will retry after 8.458351618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:51.788367   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.802674   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.912952   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.915907   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.284979   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.302620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.417553   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.422725   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.785476   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.801653   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.911126   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.911882   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.286067   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.300801   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.418960   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.420629   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.851794   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.853714   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.922918   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.923746   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.287898   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.302372   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.425848   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.426641   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.792214   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.801130   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.915252   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.915642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.283583   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.304005   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.408097   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.413323   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.784488   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.806326   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.913127   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.915413   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.427055   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.427252   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.427310   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.428375   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.787593   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.888446   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.912008   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.913074   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.288183   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.305878   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.417164   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.418270   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.784210   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.802894   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.909720   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.912051   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.285258   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.300454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.412828   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.414479   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.784411   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.801492   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.911089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.912058   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.283993   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.299989   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.412668   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.419029   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.784705   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.804623   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.909691   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.912501   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.220065   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:00.284147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.302108   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.416685   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.418642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.786304   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.803095   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.911931   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.915399   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.286093   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.301584   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.412443   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.414896   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.458011   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237894856s)
	W1027 18:58:01.458080   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.458103   63277 retry.go:31] will retry after 16.405228739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.784222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.803092   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.908661   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.910814   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.284729   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.302770   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.414874   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.414965   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.789864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.800637   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.914649   63277 kapi.go:107] duration metric: took 43.009618954s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:58:02.914893   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.286072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.299857   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.418386   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.791799   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.803302   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.914538   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.286257   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.302605   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.416367   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.783206   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.867278   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.911899   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.285072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.300843   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.414023   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.785545   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.803246   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.924390   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.284685   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.301604   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.415639   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.786150   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.886295   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.912165   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.284913   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.302714   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.412538   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.787904   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.801832   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.911724   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.282968   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.300993   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.414821   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.786690   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.803923   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.911877   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.297222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.301996   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.422572   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.788150   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.805824   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.913774   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.293390   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.305508   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.420862   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.792615   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.802761   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.912280   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.288594   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.306089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.417798   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.787690   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.802673   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.912590   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.284220   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.308323   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.414975   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.787839   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.800833   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.915221   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.540620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.543249   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.543347   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.788031   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.805504   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.912643   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.288515   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.303121   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.425413   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.786082   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.800338   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.911089   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.290704   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.300954   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.415781   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.785268   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.801079   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.914809   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.284643   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.301478   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.425519   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.783788   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.802402   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.916061   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.289294   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.307167   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.426377   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.784384   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.800170   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.864299   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:17.913670   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.286332   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.302108   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.413514   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.786024   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.802816   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.911079   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.285445   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.389432   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.439230   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.57487824s)
	W1027 18:58:19.439294   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:19.439322   63277 retry.go:31] will retry after 19.626476762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:19.486856   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.786120   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.806643   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.910901   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.287756   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.302427   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.418486   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.783960   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.800528   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.913267   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.285594   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.302211   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.420494   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.786759   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.804159   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.912377   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.283620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.301149   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.427642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.783574   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.802410   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.914836   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.288209   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.303010   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.421096   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.789207   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.808143   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.911641   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.286064   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.303547   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.425719   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.792130   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.801495   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.913750   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.289935   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.305864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.432159   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.784691   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.803435   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.912224   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.285500   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.301355   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.418759   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.785783   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.810515   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.912606   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.284842   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.300596   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.415566   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.787354   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.800995   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.912310   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.284479   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.303281   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.419682   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.789550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.800133   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.915291   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.288142   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.302992   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.418531   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.785066   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.800998   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.911612   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.287335   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.300823   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.414607   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.785353   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.801683   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.914771   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.286892   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.309512   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.413660   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.784745   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.804007   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.914073   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.285574   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.302369   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.415432   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.787607   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.801278   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.912924   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.286454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.300583   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.413776   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.790802   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.808782   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.912972   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.286709   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.304110   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.420826   63277 kapi.go:107] duration metric: took 1m14.513497503s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:58:34.786102   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.801992   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.285498   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.301550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.784165   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.800807   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.284911   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.299796   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.788910   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.804143   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.284496   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.302139   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.785508   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.802879   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.286869   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.300852   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.786222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.804588   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.066915   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:39.318253   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.410241   63277 kapi.go:107] duration metric: took 1m15.113592039s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:58:39.412086   63277 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-864929 cluster.
	I1027 18:58:39.413383   63277 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:58:39.414377   63277 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
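A note on the gcp-auth messages above: the opt-out the addon describes is the pod label it names. A minimal sketch of a pod spec that opts out, assuming the conventional "true" value for the label (the pod name and image are placeholders, not taken from this run):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # placeholder name
      labels:
        gcp-auth-skip-secret: "true"     # presence of this label tells the gcp-auth webhook to skip mounting credentials
    spec:
      containers:
      - name: app
        image: busybox                   # placeholder image
        command: ["sleep", "3600"]

Per the message above, pods created before the addon finished keep their current state; they need to be recreated or the addon re-enabled with --refresh to pick up the credentials.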
	I1027 18:58:39.785506   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.146885   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.0799187s)
	W1027 18:58:40.146963   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:58:40.147096   63277 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
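The inspektor-gadget failure above comes from kubectl's manifest validation (the --validate check mentioned in the error): every document in an applied manifest must carry top-level apiVersion and kind fields, and the validator reports exactly which are missing ("apiVersion not set, kind not set"). The real /etc/kubernetes/addons/ig-crd.yaml defines the gadget CRDs and its contents are not captured in this log; the sketch below uses a trivially small ConfigMap only to show the required header shape:

    apiVersion: v1          # API group/version of the object; omitting this triggers "apiVersion not set"
    kind: ConfigMap         # object kind; omitting this triggers "kind not set"
    metadata:
      name: example         # placeholder name, unrelated to the gadget addon
      namespace: gadget
    data: {}

As the error text notes, --validate=false would suppress the check; here the addon path instead logs "apply failed, will retry" (addons.go:461 above) and re-applies the manifest.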
	I1027 18:58:40.287330   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.782964   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.285147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.783255   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.286213   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.785272   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.282878   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.789437   63277 kapi.go:107] duration metric: took 1m22.009986905s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 18:58:43.791464   63277 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 18:58:43.792829   63277 addons.go:514] duration metric: took 1m33.521403387s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 18:58:43.792875   63277 start.go:246] waiting for cluster config update ...
	I1027 18:58:43.792913   63277 start.go:255] writing updated cluster config ...
	I1027 18:58:43.793226   63277 ssh_runner.go:195] Run: rm -f paused
	I1027 18:58:43.802235   63277 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:43.806653   63277 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f8dfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.812431   63277 pod_ready.go:94] pod "coredns-66bc5c9577-f8dfl" is "Ready"
	I1027 18:58:43.812452   63277 pod_ready.go:86] duration metric: took 5.764753ms for pod "coredns-66bc5c9577-f8dfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.816160   63277 pod_ready.go:83] waiting for pod "etcd-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.821965   63277 pod_ready.go:94] pod "etcd-addons-864929" is "Ready"
	I1027 18:58:43.821993   63277 pod_ready.go:86] duration metric: took 5.807724ms for pod "etcd-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.824005   63277 pod_ready.go:83] waiting for pod "kube-apiserver-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.828898   63277 pod_ready.go:94] pod "kube-apiserver-addons-864929" is "Ready"
	I1027 18:58:43.828923   63277 pod_ready.go:86] duration metric: took 4.897075ms for pod "kube-apiserver-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.830643   63277 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.207152   63277 pod_ready.go:94] pod "kube-controller-manager-addons-864929" is "Ready"
	I1027 18:58:44.207194   63277 pod_ready.go:86] duration metric: took 376.531709ms for pod "kube-controller-manager-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.415720   63277 pod_ready.go:83] waiting for pod "kube-proxy-5grdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.807579   63277 pod_ready.go:94] pod "kube-proxy-5grdt" is "Ready"
	I1027 18:58:44.807611   63277 pod_ready.go:86] duration metric: took 391.860267ms for pod "kube-proxy-5grdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.008299   63277 pod_ready.go:83] waiting for pod "kube-scheduler-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.409571   63277 pod_ready.go:94] pod "kube-scheduler-addons-864929" is "Ready"
	I1027 18:58:45.409599   63277 pod_ready.go:86] duration metric: took 401.265666ms for pod "kube-scheduler-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.409611   63277 pod_ready.go:40] duration metric: took 1.607328787s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:45.455187   63277 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 18:58:45.457073   63277 out.go:179] * Done! kubectl is now configured to use "addons-864929" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.393820362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c017a15-9e55-4c21-83d5-2ae717512cf0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.394022103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c017a15-9e55-4c21-83d5-2ae717512cf0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.394728133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb,PodSandboxId:d56d1c1bba09188f7a1825c4b896edc447234b5fff8d7d1dcbd81a9f40f9c5ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761591513353734196,Labels:map[string]string{io.kubernetes.container.name: co
ntroller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-k59b5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b940d9c-daf7-43a0-965f-93d0278a1913,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Meta
data:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933fa
f9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd0951
38c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0a056d958ba0e9df5b60dee569ae445476e826c493b74d7553383ff024320,PodSandboxId:1324ec30ac3236306c7c90094dbf02a0f3c0382beef5f98e4c3831943375aef5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591496535840408,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ll76,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d59a4fd7-fa88-4810-be40-28b0fc1694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7b563f60d965e0e7630e1aee05c605209b846448c5c59f53c0a16a9f9d665d,PodSandboxId:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761591496414288177,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,},Annotations:map[string]string{io.kuber
netes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2c43aca3bd88ad2b0df10f83323a46e0f850b0a9fa20cbddf8353f1fcdc4ab,PodSandboxId:3203c7b5e19abb3f2b904f6e61f87d3b2f43c4d0227398b77c6a1c87d83067df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591495239430024,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qkgj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b597aeaa-6d3f-49c8-86a0-311d3f9d468f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},A
nnotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748185a242063201d993b2e869541776467517eaaa221985410ce6275695797c,PodSandboxId:d8de63c4ef02e976d3f83b9413c0bc5b2d60b40303b2a9552a9eb03cb001bba6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761591478296820394,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c0967e-2aba-46db-9b8d-50afb9e508c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591
431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb03
5e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da138
8827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort
\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c017a15-9e55-4c21-83d5-2ae717512cf0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.450824019Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2df86b27-5730-4c22-af30-d2976b27476c name=/runtime.v1.RuntimeService/Version
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.450935147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2df86b27-5730-4c22-af30-d2976b27476c name=/runtime.v1.RuntimeService/Version
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.452685853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79581ff4-2fb7-473f-b8a7-966dd0052bde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.454929210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591698454899555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79581ff4-2fb7-473f-b8a7-966dd0052bde name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.456350462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd21f44a-0c5b-426f-9a67-37951fd57b3e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.456428391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd21f44a-0c5b-426f-9a67-37951fd57b3e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.457072862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb,PodSandboxId:d56d1c1bba09188f7a1825c4b896edc447234b5fff8d7d1dcbd81a9f40f9c5ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761591513353734196,Labels:map[string]string{io.kubernetes.container.name: co
ntroller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-k59b5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b940d9c-daf7-43a0-965f-93d0278a1913,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Meta
data:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933fa
f9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd0951
38c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0a056d958ba0e9df5b60dee569ae445476e826c493b74d7553383ff024320,PodSandboxId:1324ec30ac3236306c7c90094dbf02a0f3c0382beef5f98e4c3831943375aef5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591496535840408,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ll76,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d59a4fd7-fa88-4810-be40-28b0fc1694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7b563f60d965e0e7630e1aee05c605209b846448c5c59f53c0a16a9f9d665d,PodSandboxId:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761591496414288177,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,},Annotations:map[string]string{io.kuber
netes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2c43aca3bd88ad2b0df10f83323a46e0f850b0a9fa20cbddf8353f1fcdc4ab,PodSandboxId:3203c7b5e19abb3f2b904f6e61f87d3b2f43c4d0227398b77c6a1c87d83067df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591495239430024,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qkgj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b597aeaa-6d3f-49c8-86a0-311d3f9d468f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},A
nnotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748185a242063201d993b2e869541776467517eaaa221985410ce6275695797c,PodSandboxId:d8de63c4ef02e976d3f83b9413c0bc5b2d60b40303b2a9552a9eb03cb001bba6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761591478296820394,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c0967e-2aba-46db-9b8d-50afb9e508c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591
431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb03
5e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da138
8827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort
\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd21f44a-0c5b-426f-9a67-37951fd57b3e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.498962665Z" level=debug msg="Request: &ExecSyncRequest{ContainerId:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,Cmd:[/bin/gadgettracermanager -liveness],Timeout:2,}" file="otel-collector/interceptors.go:62" id=e34f7bb1-0efc-4c53-9e48-f00ee1cc3f76 name=/runtime.v1.RuntimeService/ExecSync
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.499769175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc3ac88f-191f-4bf1-8ea4-d70587ead4c1 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.499855147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc3ac88f-191f-4bf1-8ea4-d70587ead4c1 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.502797427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d5bdaf2-beb1-43b9-9cec-e4bc961f5090 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.504016444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591698503987461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d5bdaf2-beb1-43b9-9cec-e4bc961f5090 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.505035340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8683f11b-5945-48bb-bf0d-abe92984a5c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.505247227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8683f11b-5945-48bb-bf0d-abe92984a5c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.506876588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb,PodSandboxId:d56d1c1bba09188f7a1825c4b896edc447234b5fff8d7d1dcbd81a9f40f9c5ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761591513353734196,Labels:map[string]string{io.kubernetes.container.name: co
ntroller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-k59b5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b940d9c-daf7-43a0-965f-93d0278a1913,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Meta
data:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933fa
f9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd0951
38c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0a056d958ba0e9df5b60dee569ae445476e826c493b74d7553383ff024320,PodSandboxId:1324ec30ac3236306c7c90094dbf02a0f3c0382beef5f98e4c3831943375aef5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591496535840408,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ll76,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d59a4fd7-fa88-4810-be40-28b0fc1694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7b563f60d965e0e7630e1aee05c605209b846448c5c59f53c0a16a9f9d665d,PodSandboxId:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761591496414288177,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,},Annotations:map[string]string{io.kuber
netes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2c43aca3bd88ad2b0df10f83323a46e0f850b0a9fa20cbddf8353f1fcdc4ab,PodSandboxId:3203c7b5e19abb3f2b904f6e61f87d3b2f43c4d0227398b77c6a1c87d83067df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591495239430024,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qkgj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b597aeaa-6d3f-49c8-86a0-311d3f9d468f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},A
nnotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748185a242063201d993b2e869541776467517eaaa221985410ce6275695797c,PodSandboxId:d8de63c4ef02e976d3f83b9413c0bc5b2d60b40303b2a9552a9eb03cb001bba6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761591478296820394,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c0967e-2aba-46db-9b8d-50afb9e508c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591
431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb03
5e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da138
8827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort
\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8683f11b-5945-48bb-bf0d-abe92984a5c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.553504679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ecfdf9c-ccfe-4031-bf30-c7193a008876 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.553590801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ecfdf9c-ccfe-4031-bf30-c7193a008876 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.562466009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10a7b439-8e5a-4042-b653-96e7f8199946 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.564490352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591698564457173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10a7b439-8e5a-4042-b653-96e7f8199946 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.567704936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b880ee34-2e19-48e0-b6b6-e502626f3a95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.567954180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b880ee34-2e19-48e0-b6b6-e502626f3a95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:01:38 addons-864929 crio[816]: time="2025-10-27 19:01:38.568941886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb,PodSandboxId:d56d1c1bba09188f7a1825c4b896edc447234b5fff8d7d1dcbd81a9f40f9c5ff,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761591513353734196,Labels:map[string]string{io.kubernetes.container.name: co
ntroller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-k59b5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b940d9c-daf7-43a0-965f-93d0278a1913,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Meta
data:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933fa
f9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd0951
38c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0a056d958ba0e9df5b60dee569ae445476e826c493b74d7553383ff024320,PodSandboxId:1324ec30ac3236306c7c90094dbf02a0f3c0382beef5f98e4c3831943375aef5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591496535840408,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2ll76,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d59a4fd7-fa88-4810-be40-28b0fc1694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7b563f60d965e0e7630e1aee05c605209b846448c5c59f53c0a16a9f9d665d,PodSandboxId:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761591496414288177,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,},Annotations:map[string]string{io.kuber
netes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb2c43aca3bd88ad2b0df10f83323a46e0f850b0a9fa20cbddf8353f1fcdc4ab,PodSandboxId:3203c7b5e19abb3f2b904f6e61f87d3b2f43c4d0227398b77c6a1c87d83067df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761591495239430024,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4qkgj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b597aeaa-6d3f-49c8-86a0-311d3f9d468f,},
Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},A
nnotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:748185a242063201d993b2e869541776467517eaaa221985410ce6275695797c,PodSandboxId:d8de63c4ef02e976d3f83b9413c0bc5b2d60b40303b2a9552a9eb03cb001bba6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761591478296820394,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.p
od.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c0967e-2aba-46db-9b8d-50afb9e508c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER
_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_R
UNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591
431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb03
5e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da138
8827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort
\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b880ee34-2e19-48e0-b6b6-e502626f3a95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	adaa598112f17       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                                              2 minutes ago       Running             nginx                                    0                   0e930ac960395       nginx
	c4aa82535ec10       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   2e4a1f88f6c72       busybox
	9a32d240f03f8       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago       Running             csi-snapshotter                          0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	0067ca876ce6c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago       Running             csi-provisioner                          0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	7bd7ab79c70b2       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago       Running             liveness-probe                           0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	9c291b0333c5d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	b8806283adced       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             3 minutes ago       Running             controller                               0                   d56d1c1bba091       ingress-nginx-controller-675c5ddd98-k59b5
	ba505fec54a41       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	b250e967b1910       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	07110c7b3afc0       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   d3fe0c8c9df1b       csi-hostpath-resizer-0
	f241dd9f7205d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   6429ac3aeaf4c       csi-hostpath-attacher-0
	1708c06c7e746       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   6813db443ad42       snapshot-controller-7d9fbc56b8-9nfvf
	a8f995b816e57       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   41f0fc88d88a6       snapshot-controller-7d9fbc56b8-t78cg
	d3d0a056d958b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   3 minutes ago       Exited              patch                                    0                   1324ec30ac323       ingress-nginx-admission-patch-2ll76
	6f7b563f60d96       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   bc410cbb40cb5       local-path-provisioner-648f6765c9-xhp4p
	fb2c43aca3bd8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   3 minutes ago       Exited              create                                   0                   3203c7b5e19ab       ingress-nginx-admission-create-4qkgj
	11101a79fc073       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago       Running             gadget                                   0                   a34c89c3d97f4       gadget-5bx7q
	748185a242063       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago       Running             minikube-ingress-dns                     0                   d8de63c4ef02e       kube-ingress-dns-minikube
	47c975912a905       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago       Running             amd-gpu-device-plugin                    0                   eb20897f30dfb       amd-gpu-device-plugin-zg4tw
	9580ed2258f1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   d0de4be78d27d       storage-provisioner
	378ab83eabeec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago       Running             coredns                                  0                   a87aa3850ab80       coredns-66bc5c9577-f8dfl
	c25a92cc96070       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago       Running             kube-proxy                               0                   1549458dc06ee       kube-proxy-5grdt
	23a81c0c110d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago       Running             kube-scheduler                           0                   34da138882788       kube-scheduler-addons-864929
	473b2a7d1d8d4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago       Running             etcd                                     0                   d582ed9677d49       etcd-addons-864929
	4eba041d7c32a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago       Running             kube-controller-manager                  0                   af045b669200a       kube-controller-manager-addons-864929
	a0eb12ce7e210       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago       Running             kube-apiserver                           0                   c5570e67c7a56       kube-apiserver-addons-864929
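	The table above is the CRI runtime's view of the node. A minimal way to reproduce it interactively, assuming the addons-864929 profile is still running, minikube is on PATH, and crictl is present on the cri-o guest (the default for this image):
	
	    minikube -p addons-864929 ssh -- sudo crictl ps -a
	
	The -a flag also lists exited containers, such as the admission create/patch jobs shown above.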
	
	
	==> coredns [378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:38159 - 60296 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000932187s
	[INFO] 10.244.0.23:39996 - 50396 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001453501s
	[INFO] 10.244.0.23:41528 - 60042 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166979s
	[INFO] 10.244.0.23:52229 - 1907 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000298972s
	[INFO] 10.244.0.23:57532 - 21461 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121257s
	[INFO] 10.244.0.23:38141 - 12267 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164298s
	[INFO] 10.244.0.23:36015 - 45520 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005738249s
	[INFO] 10.244.0.23:44733 - 20303 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006362815s
	[INFO] 10.244.0.27:36270 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001630793s
	[INFO] 10.244.0.27:59264 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123984s
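	To pull the same CoreDNS output directly, or follow it live while retrying the test, one option is to select the pods by label (a sketch; assumes the standard k8s-app=kube-dns label that kubeadm-style clusters apply to CoreDNS pods):
	
	    kubectl --context addons-864929 -n kube-system logs -l k8s-app=kube-dns --tail=50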
	
	
	==> describe nodes <==
	Name:               addons-864929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-864929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-864929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864929
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-864929"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864929
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:01:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:57:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    addons-864929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 780db33d391d49adb77a2a509bc06274
	  System UUID:                780db33d-391d-49ad-b77a-2a509bc06274
	  Boot ID:                    6fa66b3e-a553-40c9-b7f0-71dd11966be5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     hello-world-app-5d498dc89-wmhrh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  gadget                      gadget-5bx7q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-k59b5    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m19s
	  kube-system                 amd-gpu-device-plugin-zg4tw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 coredns-66bc5c9577-f8dfl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 csi-hostpathplugin-2kk6q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 etcd-addons-864929                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m33s
	  kube-system                 kube-apiserver-addons-864929                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-addons-864929        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-5grdt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-864929                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 snapshot-controller-7d9fbc56b8-9nfvf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-t78cg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  local-path-storage          local-path-provisioner-648f6765c9-xhp4p      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m26s  kube-proxy       
	  Normal  Starting                 4m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m33s  kubelet          Node addons-864929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s  kubelet          Node addons-864929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s  kubelet          Node addons-864929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m32s  kubelet          Node addons-864929 status is now: NodeReady
	  Normal  RegisteredNode           4m29s  node-controller  Node addons-864929 event: Registered Node addons-864929 in Controller
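	The node description above can be refreshed at any point to see whether requests/limits or node conditions changed after the failure; a sketch using the same context and node name:
	
	    kubectl --context addons-864929 describe node addons-864929
	    kubectl --context addons-864929 get node addons-864929 -o wide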
	
	
	==> dmesg <==
	[Oct27 18:57] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000037] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.035983] kauditd_printk_skb: 321 callbacks suppressed
	[  +0.074749] kauditd_printk_skb: 215 callbacks suppressed
	[  +0.252144] kauditd_printk_skb: 390 callbacks suppressed
	[ +13.923984] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.170668] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.426658] kauditd_printk_skb: 32 callbacks suppressed
	[Oct27 18:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.493718] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.181992] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.064652] kauditd_printk_skb: 94 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.654510] kauditd_printk_skb: 156 callbacks suppressed
	[  +5.691951] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.014421] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.186188] kauditd_printk_skb: 26 callbacks suppressed
	[ +13.043727] kauditd_printk_skb: 47 callbacks suppressed
	[Oct27 18:59] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.809040] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.269736] kauditd_printk_skb: 141 callbacks suppressed
	[  +0.027386] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.740720] kauditd_printk_skb: 139 callbacks suppressed
	[ +11.255527] kauditd_printk_skb: 58 callbacks suppressed
	[Oct27 19:01] kauditd_printk_skb: 22 callbacks suppressed
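	Only kauditd rate-limiting messages appear in the excerpt above. To inspect the full kernel ring buffer on the guest (a sketch; assumes passwordless sudo over minikube ssh, which is the default):
	
	    minikube -p addons-864929 ssh -- sudo dmesg | tail -n 100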
	
	
	==> etcd [473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c] <==
	{"level":"info","ts":"2025-10-27T18:57:56.410315Z","caller":"traceutil/trace.go:172","msg":"trace[229503704] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:948; }","duration":"134.367662ms","start":"2025-10-27T18:57:56.275938Z","end":"2025-10-27T18:57:56.410305Z","steps":["trace[229503704] 'agreement among raft nodes before linearized reading'  (duration: 132.957173ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:57:56.410060Z","caller":"traceutil/trace.go:172","msg":"trace[891786162] linearizableReadLoop","detail":"{readStateIndex:975; appliedIndex:975; }","duration":"131.983723ms","start":"2025-10-27T18:57:56.275942Z","end":"2025-10-27T18:57:56.407926Z","steps":["trace[891786162] 'read index received'  (duration: 131.979399ms)","trace[891786162] 'applied index is now lower than readState.Index'  (duration: 3.544µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:57:56.412263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.94226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:57:56.412309Z","caller":"traceutil/trace.go:172","msg":"trace[262639361] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:948; }","duration":"119.995829ms","start":"2025-10-27T18:57:56.292305Z","end":"2025-10-27T18:57:56.412301Z","steps":["trace[262639361] 'agreement among raft nodes before linearized reading'  (duration: 119.922856ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.125078Z","caller":"traceutil/trace.go:172","msg":"trace[493772090] linearizableReadLoop","detail":"{readStateIndex:1016; appliedIndex:1016; }","duration":"108.067998ms","start":"2025-10-27T18:58:11.016880Z","end":"2025-10-27T18:58:11.124948Z","steps":["trace[493772090] 'read index received'  (duration: 108.0588ms)","trace[493772090] 'applied index is now lower than readState.Index'  (duration: 7.728µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:11.125323Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.422079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2025-10-27T18:58:11.125351Z","caller":"traceutil/trace.go:172","msg":"trace[2111825061] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:984; }","duration":"108.467942ms","start":"2025-10-27T18:58:11.016877Z","end":"2025-10-27T18:58:11.125345Z","steps":["trace[2111825061] 'agreement among raft nodes before linearized reading'  (duration: 108.282493ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.125763Z","caller":"traceutil/trace.go:172","msg":"trace[1553925984] transaction","detail":"{read_only:false; response_revision:985; number_of_response:1; }","duration":"186.294868ms","start":"2025-10-27T18:58:10.939461Z","end":"2025-10-27T18:58:11.125756Z","steps":["trace[1553925984] 'process raft request'  (duration: 186.212532ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.142309Z","caller":"traceutil/trace.go:172","msg":"trace[839786025] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"138.309645ms","start":"2025-10-27T18:58:11.003986Z","end":"2025-10-27T18:58:11.142296Z","steps":["trace[839786025] 'process raft request'  (duration: 138.098647ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:13.531058Z","caller":"traceutil/trace.go:172","msg":"trace[30205562] linearizableReadLoop","detail":"{readStateIndex:1025; appliedIndex:1025; }","duration":"254.599969ms","start":"2025-10-27T18:58:13.276437Z","end":"2025-10-27T18:58:13.531037Z","steps":["trace[30205562] 'read index received'  (duration: 254.54701ms)","trace[30205562] 'applied index is now lower than readState.Index'  (duration: 3.554µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:13.531448Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.007373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.531551Z","caller":"traceutil/trace.go:172","msg":"trace[1564347891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:993; }","duration":"255.121817ms","start":"2025-10-27T18:58:13.276412Z","end":"2025-10-27T18:58:13.531534Z","steps":["trace[1564347891] 'agreement among raft nodes before linearized reading'  (duration: 254.972595ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:13.531509Z","caller":"traceutil/trace.go:172","msg":"trace[1892686575] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"391.579159ms","start":"2025-10-27T18:58:13.139914Z","end":"2025-10-27T18:58:13.531493Z","steps":["trace[1892686575] 'process raft request'  (duration: 391.354515ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:13.531824Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T18:58:13.139894Z","time spent":"391.808403ms","remote":"127.0.0.1:52894","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:985 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-27T18:58:13.532035Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.107659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.532079Z","caller":"traceutil/trace.go:172","msg":"trace[2038100128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"126.149586ms","start":"2025-10-27T18:58:13.405923Z","end":"2025-10-27T18:58:13.532072Z","steps":["trace[2038100128] 'agreement among raft nodes before linearized reading'  (duration: 126.101237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:13.531900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"238.553844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.532339Z","caller":"traceutil/trace.go:172","msg":"trace[854445731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"239.005326ms","start":"2025-10-27T18:58:13.293326Z","end":"2025-10-27T18:58:13.532332Z","steps":["trace[854445731] 'agreement among raft nodes before linearized reading'  (duration: 238.54211ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:34.711927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.634065ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:34.712061Z","caller":"traceutil/trace.go:172","msg":"trace[895249490] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1119; }","duration":"113.796373ms","start":"2025-10-27T18:58:34.598253Z","end":"2025-10-27T18:58:34.712049Z","steps":["trace[895249490] 'range keys from in-memory index tree'  (duration: 113.587415ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:38.243222Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.660222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:38.243481Z","caller":"traceutil/trace.go:172","msg":"trace[698536657] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1136; }","duration":"193.931351ms","start":"2025-10-27T18:58:38.049536Z","end":"2025-10-27T18:58:38.243467Z","steps":["trace[698536657] 'range keys from in-memory index tree'  (duration: 193.593238ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:42.492190Z","caller":"traceutil/trace.go:172","msg":"trace[1973944569] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"119.999969ms","start":"2025-10-27T18:58:42.372178Z","end":"2025-10-27T18:58:42.492178Z","steps":["trace[1973944569] 'process raft request'  (duration: 119.899102ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:59:11.698647Z","caller":"traceutil/trace.go:172","msg":"trace[361898695] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1348; }","duration":"135.106526ms","start":"2025-10-27T18:59:11.563481Z","end":"2025-10-27T18:59:11.698587Z","steps":["trace[361898695] 'process raft request'  (duration: 135.018245ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:59:14.141155Z","caller":"traceutil/trace.go:172","msg":"trace[837123529] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"206.995462ms","start":"2025-10-27T18:59:13.934147Z","end":"2025-10-27T18:59:14.141142Z","steps":["trace[837123529] 'process raft request'  (duration: 206.907826ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:01:39 up 5 min,  0 users,  load average: 0.77, 1.62, 0.85
	Linux addons-864929 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
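	Equivalent host facts can be re-collected from the guest in one ssh invocation (a sketch; /etc/os-release is where PRETTY_NAME comes from on this Buildroot image):
	
	    minikube -p addons-864929 ssh -- "uptime; uname -a; cat /etc/os-release"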
	
	
	==> kube-apiserver [a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e] <==
	W1027 18:57:22.323729       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:22.344896       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1027 18:57:23.882919       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.65.231"}
	W1027 18:57:39.184847       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:39.206284       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:39.243149       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:39.253377       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1027 18:58:11.250340       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:11.250699       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:11.250761       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:11.256876       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.257462       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.269028       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.311326       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:11.522386       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:58:56.253891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.216:8443->192.168.39.1:59114: use of closed network connection
	E1027 18:58:56.463232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.216:8443->192.168.39.1:59134: use of closed network connection
	I1027 18:59:05.726497       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.152.62"}
	I1027 18:59:12.082722       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 18:59:12.280737       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1027 18:59:12.320355       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.254.157"}
	I1027 19:01:37.350902       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.103.64"}
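	The apiserver messages above (etcd dial retries during startup, then the metrics.k8s.io 503s around 18:58) come from the kube-apiserver static pod and can be re-read or tailed with:
	
	    kubectl --context addons-864929 -n kube-system logs kube-apiserver-addons-864929 --tail=100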
	
	
	==> kube-controller-manager [4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c] <==
	I1027 18:57:09.194795       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 18:57:09.197029       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 18:57:09.197199       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 18:57:09.198769       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 18:57:09.199421       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 18:57:09.199859       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 18:57:09.202220       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 18:57:09.202262       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 18:57:09.204812       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-864929" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:09.205058       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 18:57:09.205335       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 18:57:09.209450       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	E1027 18:57:17.574464       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 18:57:39.171065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:57:39.171215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:57:39.171282       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:57:39.201480       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:57:39.217694       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 18:57:39.272250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:39.320974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 18:58:09.292080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:58:09.340459       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:59:09.765540       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1027 18:59:29.759701       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1027 18:59:41.557310       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c] <==
	I1027 18:57:11.964888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:12.066455       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:12.066978       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.216"]
	E1027 18:57:12.067747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:12.441037       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 18:57:12.441091       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 18:57:12.441116       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:12.549755       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:12.551449       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:12.551483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:12.643682       1 config.go:200] "Starting service config controller"
	I1027 18:57:12.643795       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:12.644779       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:12.644795       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:12.644821       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:12.644825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:12.652942       1 config.go:309] "Starting node config controller"
	I1027 18:57:12.654707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:12.654716       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:12.746008       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 18:57:12.746581       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:12.760983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522] <==
	E1027 18:57:02.235831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:02.235898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:02.236336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:02.236405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:02.236138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:02.236633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:02.236754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:02.237054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:02.237146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:02.237161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.169999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:03.170507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:03.173314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 18:57:03.241827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 18:57:03.244384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:03.277509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:03.311109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 18:57:03.348245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.360178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:03.360672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:03.390147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:03.532742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:03.622727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:03.635923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1027 18:57:06.218759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:00:26 addons-864929 kubelet[1502]: E1027 19:00:26.138431    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(504b682e-4d7e-4f98-913e-efaa9ccfd4a1): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:00:26 addons-864929 kubelet[1502]: E1027 19:00:26.139809    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="504b682e-4d7e-4f98-913e-efaa9ccfd4a1"
	Oct 27 19:00:26 addons-864929 kubelet[1502]: E1027 19:00:26.718790    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="504b682e-4d7e-4f98-913e-efaa9ccfd4a1"
	Oct 27 19:00:29 addons-864929 kubelet[1502]: I1027 19:00:29.305086    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zg4tw" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:00:35 addons-864929 kubelet[1502]: E1027 19:00:35.723143    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591635722662822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:00:35 addons-864929 kubelet[1502]: E1027 19:00:35.723177    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591635722662822  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:00:45 addons-864929 kubelet[1502]: E1027 19:00:45.726509    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591645725876370  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:00:45 addons-864929 kubelet[1502]: E1027 19:00:45.726660    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591645725876370  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:00:55 addons-864929 kubelet[1502]: E1027 19:00:55.730076    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591655729269589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:00:55 addons-864929 kubelet[1502]: E1027 19:00:55.730176    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591655729269589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:05 addons-864929 kubelet[1502]: E1027 19:01:05.733682    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591665733223233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:05 addons-864929 kubelet[1502]: E1027 19:01:05.733713    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591665733223233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:15 addons-864929 kubelet[1502]: E1027 19:01:15.737970    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591675737305712  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:15 addons-864929 kubelet[1502]: E1027 19:01:15.737996    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591675737305712  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:25 addons-864929 kubelet[1502]: E1027 19:01:25.742436    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591685742000862  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:25 addons-864929 kubelet[1502]: E1027 19:01:25.742509    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591685742000862  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:26 addons-864929 kubelet[1502]: E1027 19:01:26.335262    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 27 19:01:26 addons-864929 kubelet[1502]: E1027 19:01:26.335316    1502 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 27 19:01:26 addons-864929 kubelet[1502]: E1027 19:01:26.336681    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(4d1f2112-b21d-4876-abde-84c8de8078a0): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:01:26 addons-864929 kubelet[1502]: E1027 19:01:26.336752    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:01:28 addons-864929 kubelet[1502]: I1027 19:01:28.305760    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:01:35 addons-864929 kubelet[1502]: E1027 19:01:35.745644    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591695745139343  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:35 addons-864929 kubelet[1502]: E1027 19:01:35.745695    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591695745139343  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:37 addons-864929 kubelet[1502]: E1027 19:01:37.316893    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:01:37 addons-864929 kubelet[1502]: I1027 19:01:37.337834    1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvrk\" (UniqueName: \"kubernetes.io/projected/ce19a12f-43e8-4993-a64c-ef90bd25467c-kube-api-access-xpvrk\") pod \"hello-world-app-5d498dc89-wmhrh\" (UID: \"ce19a12f-43e8-4993-a64c-ef90bd25467c\") " pod="default/hello-world-app-5d498dc89-wmhrh"
	
	
	==> storage-provisioner [9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9] <==
	W1027 19:01:15.075033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:17.078585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:17.086216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:19.091132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:19.097316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:21.102015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:21.107828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:23.111418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:23.119028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:25.123037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:25.127813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:27.131187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:27.138895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:29.142122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:29.150373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:31.154264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:31.160302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:33.164760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:33.173548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:35.177777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:35.183499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:37.193814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:37.204516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:39.211262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:39.217952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-864929 -n addons-864929
helpers_test.go:269: (dbg) Run:  kubectl --context addons-864929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path ingress-nginx-admission-create-4qkgj ingress-nginx-admission-patch-2ll76
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path ingress-nginx-admission-create-4qkgj ingress-nginx-admission-patch-2ll76
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path ingress-nginx-admission-create-4qkgj ingress-nginx-admission-patch-2ll76: exit status 1 (91.089875ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-wmhrh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 19:01:37 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpvrk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xpvrk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-wmhrh to addons-864929
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 18:59:31 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h8cn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-6h8cn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m8s                default-scheduler  Successfully assigned default/task-pv-pod to addons-864929
	  Warning  Failed     73s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    73s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     73s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    60s (x2 over 2m7s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 18:59:25 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mgjnr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-mgjnr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m14s                default-scheduler  Successfully assigned default/test-local-path to addons-864929
	  Warning  Failed     104s                 kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    89s (x2 over 2m14s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     13s (x2 over 104s)   kubelet            Error: ErrImagePull
	  Warning  Failed     13s                  kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x2 over 103s)    kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     2s (x2 over 103s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4qkgj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2ll76" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path ingress-nginx-admission-create-4qkgj ingress-nginx-admission-patch-2ll76: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable ingress-dns --alsologtostderr -v=1: (1.494403139s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable ingress --alsologtostderr -v=1: (7.823883248s)
--- FAIL: TestAddons/parallel/Ingress (157.39s)
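
The check that fails here is the HTTP probe against the node with the Host header nginx.example.com (the `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` invocation recorded in the Audit log further down, which never reached an end time), while the post-mortem shows the default-namespace pods still pulling or back-off-pulling images from docker.io. As a rough illustration only, a deadline-bounded version of that kind of ingress probe could look like the Go sketch below; the function names, retry interval, and timeouts are assumptions for this sketch, not minikube's actual test helpers.

// Hypothetical sketch: poll an ingress endpoint with a spoofed Host header
// until it answers 200 or the context deadline expires, instead of letting a
// single curl invocation block for the whole ssh timeout.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func probeIngress(ctx context.Context, url, host string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		req.Host = host // equivalent of: curl -H 'Host: nginx.example.com'
		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the ingress routed the request to a backend
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("ingress never answered: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(probeIngress(ctx, "http://127.0.0.1/", "nginx.example.com"))
}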

                                                
                                    
x
+
TestAddons/parallel/CSI (383.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1027 18:59:18.224863   62705 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 18:59:18.241103   62705 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 18:59:18.241134   62705 kapi.go:107] duration metric: took 16.288658ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 16.297758ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-864929 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-864929 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [504b682e-4d7e-4f98-913e-efaa9ccfd4a1] Pending
helpers_test.go:352: "task-pv-pod" [504b682e-4d7e-4f98-913e-efaa9ccfd4a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-864929 -n addons-864929
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-27 19:05:31.810659614 +0000 UTC m=+559.735201720
addons_test.go:567: (dbg) Run:  kubectl --context addons-864929 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-864929 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-864929/192.168.39.216
Start Time:       Mon, 27 Oct 2025 18:59:31 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h8cn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-6h8cn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-864929
  Normal   Pulling    75s (x4 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     20s (x4 over 5m5s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     20s (x4 over 5m5s)   kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x6 over 5m5s)    kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     5s (x6 over 5m5s)    kubelet            Error: ImagePullBackOff
addons_test.go:567: (dbg) Run:  kubectl --context addons-864929 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-864929 logs task-pv-pod -n default: exit status 1 (75.559164ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-864929 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
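The CSI failure above reduces to the root cause visible in the pod events and kubelet log: docker.io/nginx could not be pulled because of Docker Hub's unauthenticated pull rate limit, so the "app=task-pv-pod" wait ran into its 6m0s context deadline. As an illustration only (this is not the helpers_test.go implementation), a label-selector wait of that shape, assuming a standard client-go setup against the local kubeconfig, could be sketched in Go as follows.

// Hypothetical sketch: poll pods matching a label selector until all are
// Running, surfacing the context deadline error the same way the 6m0s wait
// above reported "context deadline exceeded".
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err // client-go surfaces the expired context here
		}
		running := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				running = false
			}
		}
		if running {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q in %q not running: %w", selector, ns, ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitForRunning(ctx, cs, "default", "app=task-pv-pod"))
}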
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-864929 -n addons-864929
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 logs -n 25: (1.233973701s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-021762                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-021762 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-343850                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-001257 --alsologtostderr --binary-mirror http://127.0.0.1:33585 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-001257 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-001257                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-001257 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-864929 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:58 UTC │
	│ addons  │ addons-864929 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:58 UTC │ 27 Oct 25 18:58 UTC │
	│ addons  │ addons-864929 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:58 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ enable headlamp -p addons-864929 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                         │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ip      │ addons-864929 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ssh     │ addons-864929 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-864929 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ip      │ addons-864929 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	│ addons  │ addons-864929 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	│ addons  │ addons-864929 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	│ addons  │ addons-864929 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:02 UTC │ 27 Oct 25 19:03 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:24.622422   63277 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:24.622686   63277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:24.622698   63277 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:24.622702   63277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:24.622910   63277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 18:56:24.623413   63277 out.go:368] Setting JSON to false
	I1027 18:56:24.624309   63277 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5935,"bootTime":1761585450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:24.624396   63277 start.go:141] virtualization: kvm guest
	I1027 18:56:24.626201   63277 out.go:179] * [addons-864929] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:24.627811   63277 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:24.627823   63277 notify.go:220] Checking for updates...
	I1027 18:56:24.630357   63277 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:24.631602   63277 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:56:24.632948   63277 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:24.634382   63277 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 18:56:24.635581   63277 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:24.637140   63277 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:24.668548   63277 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 18:56:24.669928   63277 start.go:305] selected driver: kvm2
	I1027 18:56:24.669964   63277 start.go:925] validating driver "kvm2" against <nil>
	I1027 18:56:24.669977   63277 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:24.670794   63277 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:24.671024   63277 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:24.671068   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:56:24.671115   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:24.671129   63277 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:24.671178   63277 start.go:349] cluster config:
	{Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1027 18:56:24.671272   63277 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:56:24.672823   63277 out.go:179] * Starting "addons-864929" primary control-plane node in "addons-864929" cluster
	I1027 18:56:24.674049   63277 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:24.674093   63277 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:24.674104   63277 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:24.674220   63277 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 18:56:24.674236   63277 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:24.674548   63277 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json ...
	I1027 18:56:24.674571   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json: {Name:mk9ba1259c08877b5975916a854db91dcc4ee818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:24.674732   63277 start.go:360] acquireMachinesLock for addons-864929: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 18:56:24.674798   63277 start.go:364] duration metric: took 48.986µs to acquireMachinesLock for "addons-864929"
	I1027 18:56:24.674823   63277 start.go:93] Provisioning new machine with config: &{Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:56:24.674873   63277 start.go:125] createHost starting for "" (driver="kvm2")
	I1027 18:56:24.676393   63277 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1027 18:56:24.676558   63277 start.go:159] libmachine.API.Create for "addons-864929" (driver="kvm2")
	I1027 18:56:24.676590   63277 client.go:168] LocalClient.Create starting
	I1027 18:56:24.676678   63277 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem
	I1027 18:56:24.780202   63277 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem
	I1027 18:56:24.900124   63277 main.go:141] libmachine: creating domain...
	I1027 18:56:24.900145   63277 main.go:141] libmachine: creating network...
	I1027 18:56:24.901617   63277 main.go:141] libmachine: found existing default network
	I1027 18:56:24.901796   63277 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.902284   63277 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d609d0}
	I1027 18:56:24.902387   63277 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-864929</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.908158   63277 main.go:141] libmachine: creating private network mk-addons-864929 192.168.39.0/24...
	I1027 18:56:24.980252   63277 main.go:141] libmachine: private network mk-addons-864929 192.168.39.0/24 created
	I1027 18:56:24.980545   63277 main.go:141] libmachine: <network>
	  <name>mk-addons-864929</name>
	  <uuid>aef0d375-daa4-4865-b6ed-55a30809a7b8</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:71:bd:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
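For reference, a network created this way can be checked out-of-band with virsh; the following is a minimal sketch (not part of the captured log), assuming host access to the qemu:///system URI that the driver logged above:

    # list networks known to libvirt; mk-addons-864929 should appear as active
    virsh --connect qemu:///system net-list --all
    # dump the XML libvirt actually stored for the minikube network (compare with the dump above)
    virsh --connect qemu:///system net-dumpxml mk-addons-864929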
	
	I1027 18:56:24.980576   63277 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 ...
	I1027 18:56:24.980605   63277 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1027 18:56:24.980620   63277 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:24.980717   63277 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21801-58821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1027 18:56:25.217277   63277 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa...
	I1027 18:56:25.365950   63277 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk...
	I1027 18:56:25.365998   63277 main.go:141] libmachine: Writing magic tar header
	I1027 18:56:25.366060   63277 main.go:141] libmachine: Writing SSH key tar header
	I1027 18:56:25.366173   63277 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 ...
	I1027 18:56:25.366260   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929
	I1027 18:56:25.366305   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 (perms=drwx------)
	I1027 18:56:25.366334   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines
	I1027 18:56:25.366351   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines (perms=drwxr-xr-x)
	I1027 18:56:25.366370   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:25.366382   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube (perms=drwxr-xr-x)
	I1027 18:56:25.366392   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821
	I1027 18:56:25.366400   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821 (perms=drwxrwxr-x)
	I1027 18:56:25.366413   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1027 18:56:25.366429   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1027 18:56:25.366447   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1027 18:56:25.366462   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1027 18:56:25.366477   63277 main.go:141] libmachine: checking permissions on dir: /home
	I1027 18:56:25.366489   63277 main.go:141] libmachine: skipping /home - not owner
	I1027 18:56:25.366496   63277 main.go:141] libmachine: defining domain...
	I1027 18:56:25.367845   63277 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-864929</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-864929'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
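The XML above is what libmachine hands to libvirt when defining the domain; done manually, the equivalent steps would look roughly like this (a sketch only, assuming the XML were saved to a hypothetical local file addons-864929.xml):

    # persistently define the domain from the XML (as opposed to a transient 'virsh create')
    virsh --connect qemu:///system define addons-864929.xml
    # boot the VM; libvirt fills in defaults such as the machine type and PCI addresses
    virsh --connect qemu:///system start addons-864929
    # print the expanded definition, which should match the "starting domain XML" dump below
    virsh --connect qemu:///system dumpxml addons-864929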
	
	I1027 18:56:25.373162   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:b7:94:cf in network default
	I1027 18:56:25.374053   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:25.374075   63277 main.go:141] libmachine: starting domain...
	I1027 18:56:25.374080   63277 main.go:141] libmachine: ensuring networks are active...
	I1027 18:56:25.374872   63277 main.go:141] libmachine: Ensuring network default is active
	I1027 18:56:25.375277   63277 main.go:141] libmachine: Ensuring network mk-addons-864929 is active
	I1027 18:56:25.375873   63277 main.go:141] libmachine: getting domain XML...
	I1027 18:56:25.376860   63277 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-864929</name>
	  <uuid>780db33d-391d-49ad-b77a-2a509bc06274</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f3:30:05'/>
	      <source network='mk-addons-864929'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b7:94:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1027 18:56:26.638954   63277 main.go:141] libmachine: waiting for domain to start...
	I1027 18:56:26.640594   63277 main.go:141] libmachine: domain is now running
	I1027 18:56:26.640612   63277 main.go:141] libmachine: waiting for IP...
	I1027 18:56:26.641493   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:26.642006   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:26.642018   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:26.642278   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:26.642335   63277 retry.go:31] will retry after 204.12408ms: waiting for domain to come up
	I1027 18:56:26.847933   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:26.848726   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:26.848744   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:26.849096   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:26.849145   63277 retry.go:31] will retry after 259.734271ms: waiting for domain to come up
	I1027 18:56:27.110506   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.111193   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.111211   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.111565   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.111600   63277 retry.go:31] will retry after 353.747338ms: waiting for domain to come up
	I1027 18:56:27.467217   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.467990   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.468008   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.468404   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.468443   63277 retry.go:31] will retry after 408.188052ms: waiting for domain to come up
	I1027 18:56:27.877925   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.878585   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.878600   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.878986   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.879025   63277 retry.go:31] will retry after 584.807504ms: waiting for domain to come up
	I1027 18:56:28.465800   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:28.466457   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:28.466477   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:28.466925   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:28.466985   63277 retry.go:31] will retry after 655.104002ms: waiting for domain to come up
	I1027 18:56:29.123804   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:29.124507   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:29.124524   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:29.124825   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:29.124862   63277 retry.go:31] will retry after 1.151715647s: waiting for domain to come up
	I1027 18:56:30.278089   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:30.278736   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:30.278753   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:30.279106   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:30.279148   63277 retry.go:31] will retry after 899.383524ms: waiting for domain to come up
	I1027 18:56:31.180495   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:31.181365   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:31.181386   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:31.181743   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:31.181784   63277 retry.go:31] will retry after 1.154847749s: waiting for domain to come up
	I1027 18:56:32.337959   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:32.338631   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:32.338648   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:32.339016   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:32.339058   63277 retry.go:31] will retry after 1.618753171s: waiting for domain to come up
	I1027 18:56:33.960150   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:33.960873   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:33.960906   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:33.961382   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:33.961433   63277 retry.go:31] will retry after 2.574218898s: waiting for domain to come up
	I1027 18:56:36.537741   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:36.538394   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:36.538410   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:36.538756   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:36.538790   63277 retry.go:31] will retry after 3.021550252s: waiting for domain to come up
	I1027 18:56:39.563948   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:39.564552   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:39.564573   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:39.564876   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:39.564921   63277 retry.go:31] will retry after 3.629212065s: waiting for domain to come up
	I1027 18:56:43.197968   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.198898   63277 main.go:141] libmachine: domain addons-864929 has current primary IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.198915   63277 main.go:141] libmachine: found domain IP: 192.168.39.216
	I1027 18:56:43.198925   63277 main.go:141] libmachine: reserving static IP address...
	I1027 18:56:43.199329   63277 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-864929", mac: "52:54:00:f3:30:05", ip: "192.168.39.216"} in network mk-addons-864929
	I1027 18:56:43.451430   63277 main.go:141] libmachine: reserved static IP address 192.168.39.216 for domain addons-864929
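The retry loop above simply polls libvirt for a DHCP lease (first from the lease table, then via ARP) until the guest NIC appears. Once 192.168.39.216 is known, the static reservation logged here can be expressed with virsh roughly as follows; this is a sketch of the equivalent operation, not necessarily the exact call the kvm2 driver makes:

    # show the lease the guest obtained on the minikube network
    virsh --connect qemu:///system net-dhcp-leases mk-addons-864929
    # pin the address to the guest's MAC so the domain keeps 192.168.39.216 across restarts
    virsh --connect qemu:///system net-update mk-addons-864929 add ip-dhcp-host \
      "<host mac='52:54:00:f3:30:05' name='addons-864929' ip='192.168.39.216'/>" --live --config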
	I1027 18:56:43.451477   63277 main.go:141] libmachine: waiting for SSH...
	I1027 18:56:43.451483   63277 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 18:56:43.455019   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.455546   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.455575   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.455753   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.456085   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.456098   63277 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 18:56:43.560285   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:56:43.560764   63277 main.go:141] libmachine: domain creation complete
	I1027 18:56:43.562456   63277 machine.go:93] provisionDockerMachine start ...
	I1027 18:56:43.564923   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.565392   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.565416   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.565609   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.565938   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.565959   63277 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:56:43.669544   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 18:56:43.669580   63277 buildroot.go:166] provisioning hostname "addons-864929"
	I1027 18:56:43.672967   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.673411   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.673440   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.673604   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.673806   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.673817   63277 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864929 && echo "addons-864929" | sudo tee /etc/hostname
	I1027 18:56:43.795625   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864929
	
	I1027 18:56:43.798861   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.799296   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.799317   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.799492   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.799700   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.799715   63277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:56:43.910892   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:56:43.910939   63277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 18:56:43.910981   63277 buildroot.go:174] setting up certificates
	I1027 18:56:43.910994   63277 provision.go:84] configureAuth start
	I1027 18:56:43.913915   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.914336   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.914362   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.916504   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.916890   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.916954   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.917128   63277 provision.go:143] copyHostCerts
	I1027 18:56:43.917210   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 18:56:43.917348   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 18:56:43.917476   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 18:56:43.917558   63277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.addons-864929 san=[127.0.0.1 192.168.39.216 addons-864929 localhost minikube]
	I1027 18:56:44.249940   63277 provision.go:177] copyRemoteCerts
	I1027 18:56:44.250009   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:56:44.252895   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.253468   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.253497   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.253713   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.336145   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:56:44.366470   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:56:44.396879   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 18:56:44.427777   63277 provision.go:87] duration metric: took 516.764566ms to configureAuth
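configureAuth generates a host CA plus a server certificate whose SANs are listed at 18:56:43.917558, then copies them to /etc/docker on the guest. A quick way to confirm the copied certificate carries those SANs, assuming openssl is available in the guest image (not part of the captured output):

    # inspect subject, issuer and validity of the server certificate that was just copied over
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -issuer -dates
    # list the SANs; they should include 127.0.0.1, 192.168.39.216, addons-864929, localhost and minikube
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'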
	I1027 18:56:44.427808   63277 buildroot.go:189] setting minikube options for container-runtime
	I1027 18:56:44.428052   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:56:44.430830   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.431257   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.431285   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.431516   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:44.431741   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:44.431759   63277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:56:44.684141   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:56:44.684169   63277 machine.go:96] duration metric: took 1.121694006s to provisionDockerMachine
	I1027 18:56:44.684180   63277 client.go:171] duration metric: took 20.007583494s to LocalClient.Create
	I1027 18:56:44.684313   63277 start.go:167] duration metric: took 20.00763875s to libmachine.API.Create "addons-864929"
	I1027 18:56:44.684443   63277 start.go:293] postStartSetup for "addons-864929" (driver="kvm2")
	I1027 18:56:44.684457   63277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:56:44.684684   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:56:44.687967   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.688366   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.688388   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.688532   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.773838   63277 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:56:44.779587   63277 info.go:137] Remote host: Buildroot 2025.02
	I1027 18:56:44.779618   63277 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 18:56:44.779720   63277 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 18:56:44.779744   63277 start.go:296] duration metric: took 95.294071ms for postStartSetup
	I1027 18:56:44.783531   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.783956   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.783992   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.784296   63277 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json ...
	I1027 18:56:44.784513   63277 start.go:128] duration metric: took 20.109628328s to createHost
	I1027 18:56:44.787202   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.787607   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.787630   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.787827   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:44.788095   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:44.788112   63277 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 18:56:44.892155   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761591404.854722623
	
	I1027 18:56:44.892187   63277 fix.go:216] guest clock: 1761591404.854722623
	I1027 18:56:44.892195   63277 fix.go:229] Guest: 2025-10-27 18:56:44.854722623 +0000 UTC Remote: 2025-10-27 18:56:44.784525373 +0000 UTC m=+20.209597039 (delta=70.19725ms)
	I1027 18:56:44.892213   63277 fix.go:200] guest clock delta is within tolerance: 70.19725ms
	I1027 18:56:44.892218   63277 start.go:83] releasing machines lock for "addons-864929", held for 20.217407876s
	I1027 18:56:44.895316   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.895759   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.895786   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.896530   63277 ssh_runner.go:195] Run: cat /version.json
	I1027 18:56:44.896625   63277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:56:44.899743   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.899867   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900211   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.900246   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900407   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.900437   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900431   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.900649   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.976028   63277 ssh_runner.go:195] Run: systemctl --version
	I1027 18:56:45.001174   63277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:56:45.161871   63277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:56:45.169373   63277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:56:45.169442   63277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:56:45.190185   63277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 18:56:45.190215   63277 start.go:495] detecting cgroup driver to use...
	I1027 18:56:45.190307   63277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:56:45.209752   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:56:45.232403   63277 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:56:45.232474   63277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:56:45.253470   63277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:56:45.271232   63277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:56:45.419310   63277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:56:45.638393   63277 docker.go:234] disabling docker service ...
	I1027 18:56:45.638482   63277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:56:45.655615   63277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:56:45.671872   63277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:56:45.833201   63277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:56:45.978905   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 18:56:45.995588   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:56:46.019765   63277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:56:46.019841   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.033497   63277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 18:56:46.033570   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.047513   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.060521   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.074441   63277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:56:46.088325   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.101213   63277 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.122423   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.135007   63277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:56:46.146221   63277 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 18:56:46.146284   63277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 18:56:46.169839   63277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:56:46.183407   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:46.324987   63277 ssh_runner.go:195] Run: sudo systemctl restart crio
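The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver and sysctl override that the rest of the start sequence relies on. A quick way to verify them on the node, with the values expected from those commands shown as comments:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",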
	I1027 18:56:46.440290   63277 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:56:46.440374   63277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:56:46.446158   63277 start.go:563] Will wait 60s for crictl version
	I1027 18:56:46.446240   63277 ssh_runner.go:195] Run: which crictl
	I1027 18:56:46.450614   63277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 18:56:46.496013   63277 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 18:56:46.496113   63277 ssh_runner.go:195] Run: crio --version
	I1027 18:56:46.526418   63277 ssh_runner.go:195] Run: crio --version
	I1027 18:56:46.560428   63277 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 18:56:46.564607   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:46.565084   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:46.565113   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:46.565366   63277 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1027 18:56:46.570158   63277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:46.586255   63277 kubeadm.go:883] updating cluster {Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:56:46.586379   63277 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:46.586431   63277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:46.623555   63277 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 18:56:46.623625   63277 ssh_runner.go:195] Run: which lz4
	I1027 18:56:46.628237   63277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 18:56:46.633510   63277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 18:56:46.633544   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 18:56:48.156071   63277 crio.go:462] duration metric: took 1.527888186s to copy over tarball
	I1027 18:56:48.156150   63277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 18:56:49.783875   63277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.627696709s)
	I1027 18:56:49.783899   63277 crio.go:469] duration metric: took 1.627800498s to extract the tarball
	I1027 18:56:49.783908   63277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 18:56:49.829229   63277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:49.875294   63277 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:49.875323   63277 cache_images.go:85] Images are preloaded, skipping loading
	I1027 18:56:49.875334   63277 kubeadm.go:934] updating node { 192.168.39.216 8443 v1.34.1 crio true true} ...
	I1027 18:56:49.875442   63277 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-864929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 18:56:49.875581   63277 ssh_runner.go:195] Run: crio config
	I1027 18:56:49.932154   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:56:49.932179   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:49.932200   63277 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:56:49.932223   63277 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864929 NodeName:addons-864929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:56:49.932364   63277 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-864929"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.216"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
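	A minimal sketch of how the generated kubeadm.yaml above could be validated before the real init; the binary path matches the one used elsewhere in this log, and use of --dry-run here is an assumption, not something this run performed:
	  # render what kubeadm would do with the generated config without modifying the node
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run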
	I1027 18:56:49.932437   63277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:56:49.945627   63277 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:56:49.945703   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:56:49.959045   63277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 18:56:49.983292   63277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:56:50.007675   63277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1027 18:56:50.032663   63277 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I1027 18:56:50.037426   63277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:50.053663   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:50.200983   63277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:56:50.242073   63277 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929 for IP: 192.168.39.216
	I1027 18:56:50.242097   63277 certs.go:195] generating shared ca certs ...
	I1027 18:56:50.242119   63277 certs.go:227] acquiring lock for ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.242309   63277 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 18:56:50.542245   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt ...
	I1027 18:56:50.542277   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt: {Name:mkb0b7411ce05946b9a6d920de38fad3ab6c6a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.542460   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key ...
	I1027 18:56:50.542471   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key: {Name:mk283eb2e002819e788fa8f18c386299d47777a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.542548   63277 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 18:56:50.638160   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt ...
	I1027 18:56:50.638191   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt: {Name:mk8a0909df9310cadf02928e1cc040e0903818db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.638365   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key ...
	I1027 18:56:50.638377   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key: {Name:mk4aa59bab040235f70f65aa2d7af7f89bd4659d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.638460   63277 certs.go:257] generating profile certs ...
	I1027 18:56:50.638519   63277 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key
	I1027 18:56:50.638549   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt with IP's: []
	I1027 18:56:50.779809   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt ...
	I1027 18:56:50.779847   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: {Name:mka2b9867ee328b7112768834356aaca6b5fc109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.780044   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key ...
	I1027 18:56:50.780059   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key: {Name:mkcbab4e1e83774a62e689c6d7789d3eb343f864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.780139   63277 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d
	I1027 18:56:50.780161   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.216]
	I1027 18:56:51.313872   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d ...
	I1027 18:56:51.313911   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d: {Name:mk4942a380088e956850812de28b65602aee81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.314117   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d ...
	I1027 18:56:51.314132   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d: {Name:mk2bf51af3cc29c0e7479b746ffe650e8b348547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.314226   63277 certs.go:382] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt
	I1027 18:56:51.314298   63277 certs.go:386] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key
	I1027 18:56:51.314355   63277 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key
	I1027 18:56:51.314373   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt with IP's: []
	I1027 18:56:51.489257   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt ...
	I1027 18:56:51.489292   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt: {Name:mk6be1958bd7a086d707056124a43ee705cf8efa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.489483   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key ...
	I1027 18:56:51.489496   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key: {Name:mkedbe974c66eb2183a2d8824fcd1a064e7f0629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.489667   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 18:56:51.489699   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:56:51.489734   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:56:51.489756   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 18:56:51.490337   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:56:51.527261   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:56:51.566595   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:56:51.597942   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 18:56:51.630829   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:56:51.664688   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:56:51.696594   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:56:51.734852   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:56:51.770778   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:56:51.805559   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:56:51.833421   63277 ssh_runner.go:195] Run: openssl version
	I1027 18:56:51.841743   63277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:56:51.857852   63277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.864612   63277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.864680   63277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.873224   63277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 18:56:51.893213   63277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:56:51.899405   63277 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:56:51.899464   63277 kubeadm.go:400] StartCluster: {Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:51.899550   63277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:56:51.899604   63277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:56:51.945935   63277 cri.go:89] found id: ""
	I1027 18:56:51.946016   63277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:56:51.959289   63277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:56:51.972387   63277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:56:51.985164   63277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:56:51.985182   63277 kubeadm.go:157] found existing configuration files:
	
	I1027 18:56:51.985239   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:56:51.997222   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:56:51.997284   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:56:52.010322   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:56:52.022203   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:56:52.022274   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:56:52.034805   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:56:52.046201   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:56:52.046272   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:56:52.059475   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:56:52.070876   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:56:52.070957   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 18:56:52.083713   63277 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 18:56:52.243337   63277 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 18:57:05.929419   63277 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:57:05.929514   63277 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:57:05.929629   63277 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:57:05.929750   63277 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:57:05.929840   63277 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 18:57:05.929894   63277 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:57:05.931664   63277 out.go:252]   - Generating certificates and keys ...
	I1027 18:57:05.931750   63277 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:57:05.931835   63277 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:57:05.931942   63277 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:57:05.932018   63277 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:57:05.932119   63277 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:57:05.932200   63277 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:57:05.932269   63277 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:57:05.932432   63277 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-864929 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I1027 18:57:05.932514   63277 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:57:05.932685   63277 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-864929 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I1027 18:57:05.932782   63277 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:57:05.932893   63277 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:57:05.932942   63277 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:57:05.932998   63277 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:57:05.933056   63277 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:05.933116   63277 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:05.933163   63277 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:05.933242   63277 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:05.933312   63277 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:05.933416   63277 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:05.933518   63277 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:05.934838   63277 out.go:252]   - Booting up control plane ...
	I1027 18:57:05.934938   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:05.935072   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:05.935153   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:05.935254   63277 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:05.935331   63277 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:05.935413   63277 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:05.935480   63277 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:05.935513   63277 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:05.935618   63277 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:05.935705   63277 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:05.935754   63277 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502502542s
	I1027 18:57:05.935827   63277 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:05.935892   63277 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.216:8443/livez
	I1027 18:57:05.935992   63277 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:05.936113   63277 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:05.936221   63277 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.069256284s
	I1027 18:57:05.936298   63277 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.735103952s
	I1027 18:57:05.936363   63277 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003425011s
	I1027 18:57:05.936455   63277 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:05.936590   63277 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:05.936648   63277 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:05.936807   63277 kubeadm.go:318] [mark-control-plane] Marking the node addons-864929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:05.936859   63277 kubeadm.go:318] [bootstrap-token] Using token: s2v11a.htd6rq4ivxisd01i
	I1027 18:57:05.938605   63277 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:05.938701   63277 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:05.938793   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:05.938934   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:05.939090   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:05.939208   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:05.939282   63277 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:05.939396   63277 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:05.939437   63277 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:05.939494   63277 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:05.939501   63277 kubeadm.go:318] 
	I1027 18:57:05.939571   63277 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:05.939578   63277 kubeadm.go:318] 
	I1027 18:57:05.939688   63277 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:05.939702   63277 kubeadm.go:318] 
	I1027 18:57:05.939738   63277 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:05.939802   63277 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:05.939870   63277 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:05.939883   63277 kubeadm.go:318] 
	I1027 18:57:05.939933   63277 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:05.939939   63277 kubeadm.go:318] 
	I1027 18:57:05.939985   63277 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:05.939991   63277 kubeadm.go:318] 
	I1027 18:57:05.940048   63277 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:05.940134   63277 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:05.940215   63277 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:05.940222   63277 kubeadm.go:318] 
	I1027 18:57:05.940329   63277 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:05.940400   63277 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:05.940406   63277 kubeadm.go:318] 
	I1027 18:57:05.940470   63277 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s2v11a.htd6rq4ivxisd01i \
	I1027 18:57:05.940553   63277 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a \
	I1027 18:57:05.940572   63277 kubeadm.go:318] 	--control-plane 
	I1027 18:57:05.940578   63277 kubeadm.go:318] 
	I1027 18:57:05.940643   63277 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:05.940649   63277 kubeadm.go:318] 
	I1027 18:57:05.940731   63277 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s2v11a.htd6rq4ivxisd01i \
	I1027 18:57:05.940833   63277 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a 
	I1027 18:57:05.940844   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:57:05.940851   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:57:05.943012   63277 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 18:57:05.944248   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 18:57:05.965148   63277 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1027 18:57:05.989594   63277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:05.989700   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:05.989727   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-864929 minikube.k8s.io/updated_at=2025_10_27T18_57_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-864929 minikube.k8s.io/primary=true
	I1027 18:57:06.017183   63277 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:06.172167   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:06.672287   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.173180   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.673264   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.172481   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.672997   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.173247   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.672863   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.172654   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.270470   63277 kubeadm.go:1113] duration metric: took 4.280852325s to wait for elevateKubeSystemPrivileges
	I1027 18:57:10.270507   63277 kubeadm.go:402] duration metric: took 18.371048599s to StartCluster
	I1027 18:57:10.270544   63277 settings.go:142] acquiring lock: {Name:mk19a39086427cb47b9bb78fd0b5176c91a751d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:10.270695   63277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:57:10.271083   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/kubeconfig: {Name:mk90c4d883178b7191d62a8cd99434bc24dd555f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:10.271332   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:10.271363   63277 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:10.271434   63277 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 18:57:10.271577   63277 addons.go:69] Setting yakd=true in profile "addons-864929"
	I1027 18:57:10.271588   63277 addons.go:69] Setting inspektor-gadget=true in profile "addons-864929"
	I1027 18:57:10.271607   63277 addons.go:238] Setting addon yakd=true in "addons-864929"
	I1027 18:57:10.271624   63277 addons.go:238] Setting addon inspektor-gadget=true in "addons-864929"
	I1027 18:57:10.271619   63277 addons.go:69] Setting default-storageclass=true in profile "addons-864929"
	I1027 18:57:10.271636   63277 addons.go:69] Setting registry-creds=true in profile "addons-864929"
	I1027 18:57:10.271644   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271653   63277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864929"
	I1027 18:57:10.271661   63277 addons.go:69] Setting metrics-server=true in profile "addons-864929"
	I1027 18:57:10.271672   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271678   63277 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864929"
	I1027 18:57:10.271688   63277 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-864929"
	I1027 18:57:10.271662   63277 addons.go:69] Setting ingress=true in profile "addons-864929"
	I1027 18:57:10.271718   63277 addons.go:238] Setting addon ingress=true in "addons-864929"
	I1027 18:57:10.271723   63277 addons.go:69] Setting registry=true in profile "addons-864929"
	I1027 18:57:10.271735   63277 addons.go:238] Setting addon registry=true in "addons-864929"
	I1027 18:57:10.271751   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271781   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271779   63277 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864929"
	I1027 18:57:10.271801   63277 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864929"
	I1027 18:57:10.272335   63277 addons.go:69] Setting ingress-dns=true in profile "addons-864929"
	I1027 18:57:10.272359   63277 addons.go:238] Setting addon ingress-dns=true in "addons-864929"
	I1027 18:57:10.272388   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272662   63277 addons.go:69] Setting storage-provisioner=true in profile "addons-864929"
	I1027 18:57:10.272684   63277 addons.go:238] Setting addon storage-provisioner=true in "addons-864929"
	I1027 18:57:10.272703   63277 addons.go:69] Setting volcano=true in profile "addons-864929"
	I1027 18:57:10.272719   63277 addons.go:69] Setting volumesnapshots=true in profile "addons-864929"
	I1027 18:57:10.272728   63277 addons.go:238] Setting addon volcano=true in "addons-864929"
	I1027 18:57:10.272731   63277 addons.go:238] Setting addon volumesnapshots=true in "addons-864929"
	I1027 18:57:10.272747   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272709   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271622   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:10.272915   63277 addons.go:69] Setting cloud-spanner=true in profile "addons-864929"
	I1027 18:57:10.272937   63277 addons.go:238] Setting addon cloud-spanner=true in "addons-864929"
	I1027 18:57:10.271674   63277 addons.go:238] Setting addon metrics-server=true in "addons-864929"
	I1027 18:57:10.272967   63277 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-864929"
	I1027 18:57:10.272979   63277 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-864929"
	I1027 18:57:10.272994   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272962   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271656   63277 addons.go:238] Setting addon registry-creds=true in "addons-864929"
	I1027 18:57:10.273353   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271719   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273804   63277 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864929"
	I1027 18:57:10.272992   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273875   63277 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-864929"
	I1027 18:57:10.273876   63277 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:10.273923   63277 addons.go:69] Setting gcp-auth=true in profile "addons-864929"
	I1027 18:57:10.273943   63277 mustload.go:65] Loading cluster: addons-864929
	I1027 18:57:10.272753   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273912   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.274165   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:10.275351   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:10.280649   63277 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-864929"
	I1027 18:57:10.280696   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.280648   63277 addons.go:238] Setting addon default-storageclass=true in "addons-864929"
	I1027 18:57:10.280792   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.281457   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:10.281464   63277 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:10.281464   63277 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:10.281472   63277 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:10.281475   63277 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:10.282784   63277 host.go:66] Checking if "addons-864929" exists ...
	W1027 18:57:10.283458   63277 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:10.284391   63277 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:10.284413   63277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:10.284781   63277 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:10.284783   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:10.284784   63277 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:10.284827   63277 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:10.285656   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:10.285668   63277 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:10.285667   63277 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:10.286196   63277 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:10.285679   63277 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:10.285694   63277 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:10.285702   63277 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:10.285712   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:10.285727   63277 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:10.285763   63277 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:10.286475   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:10.287027   63277 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:10.287211   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:10.286535   63277 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:10.287783   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:10.287353   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:10.287361   63277 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:10.288659   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:10.287367   63277 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:10.288803   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:10.288238   63277 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:10.288289   63277 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:10.289099   63277 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:10.289112   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:10.288511   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.289111   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:10.289229   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:10.289243   63277 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:10.289702   63277 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:10.289754   63277 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:10.289812   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:10.290341   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.290649   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.291093   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:10.291547   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.292319   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:10.292764   63277 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:10.292901   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:10.293496   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.294077   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:10.294199   63277 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:10.294235   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:10.294665   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.294862   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.294885   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.295658   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.296760   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.296778   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.296804   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.297672   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.298250   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.298288   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.298666   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:10.298926   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.299336   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.299404   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.300642   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301088   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.301156   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301344   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.301372   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301519   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:10.301767   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301854   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.302100   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.302209   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302286   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302408   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.302456   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302745   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302894   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.303125   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303161   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303130   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303303   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303406   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303460   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303507   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303830   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304098   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304190   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.304220   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304324   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304342   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304762   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.304791   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304845   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305002   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:10.305135   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305171   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.305224   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.305423   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305445   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.305836   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.305863   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.306116   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.307841   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:10.309061   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:10.310163   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:10.310201   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:10.312870   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.313280   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.313301   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.313464   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	W1027 18:57:10.535116   63277 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54864->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.535158   63277 retry.go:31] will retry after 369.415138ms: ssh: handshake failed: read tcp 192.168.39.1:54864->192.168.39.216:22: read: connection reset by peer
	W1027 18:57:10.541619   63277 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54870->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.541652   63277 retry.go:31] will retry after 219.162578ms: ssh: handshake failed: read tcp 192.168.39.1:54870->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.985109   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:10.985150   63277 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:11.132247   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:11.138615   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:11.138646   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:11.143955   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:11.143981   63277 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:11.155121   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:11.155156   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:11.157384   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:11.170100   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:11.321437   63277 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:11.321472   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:11.329006   63277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.057632855s)
	I1027 18:57:11.329090   63277 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.053707515s)
	I1027 18:57:11.329177   63277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:11.329278   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 18:57:11.351194   63277 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:11.351228   63277 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:11.372537   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:11.394769   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:11.394810   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:11.396018   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:11.456333   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:11.584380   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:11.712662   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:11.712687   63277 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:11.735201   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:11.735231   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:11.839761   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:11.839788   63277 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:11.900683   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.042980   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.058451   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:12.058490   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:12.070398   63277 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.070429   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:12.354158   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.354199   63277 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:12.362109   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.365612   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:12.365648   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:12.438920   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:12.438943   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:12.700463   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:12.700490   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:12.700500   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.840634   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.856064   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:12.902734   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:12.902762   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:13.137669   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:13.137698   63277 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:13.351985   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:13.352016   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:13.596268   63277 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:13.596294   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:13.714853   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.582551362s)
	I1027 18:57:13.850557   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:13.850595   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:14.071067   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:14.389873   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:14.389897   63277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:14.901480   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:14.901504   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:15.349961   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:15.349990   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:15.716286   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:15.716315   63277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:16.040523   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:17.129847   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.972419936s)
	I1027 18:57:17.129872   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.959737081s)
	I1027 18:57:17.129940   63277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.80062985s)
	I1027 18:57:17.129973   63277 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1027 18:57:17.129951   63277 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.800751256s)
	I1027 18:57:17.130902   63277 node_ready.go:35] waiting up to 6m0s for node "addons-864929" to be "Ready" ...
	I1027 18:57:17.155377   63277 node_ready.go:49] node "addons-864929" is "Ready"
	I1027 18:57:17.155425   63277 node_ready.go:38] duration metric: took 24.493356ms for node "addons-864929" to be "Ready" ...
	I1027 18:57:17.155441   63277 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:57:17.155509   63277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:57:17.249988   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.877396986s)
	I1027 18:57:17.250062   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.854018331s)
	I1027 18:57:17.250127   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.793752896s)
	I1027 18:57:17.250185   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.665778222s)
	I1027 18:57:17.686081   63277 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-864929" context rescaled to 1 replicas
	I1027 18:57:17.769830   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:17.773614   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:17.774163   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:17.774193   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:17.774409   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:17.835030   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.934303033s)
	W1027 18:57:17.835104   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.835131   63277 retry.go:31] will retry after 292.877887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
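	The retry.go:31 entries above show minikube re-running a failed addon apply after a short delay. As a rough sketch only (the helper name, signature, and fixed delay below are hypothetical, not minikube's actual retry code), the pattern reduces to re-invoking the failing step until it succeeds or attempts run out:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryAfter re-runs fn up to attempts times, sleeping delay between
// failures and returning the last error if every attempt fails. This is
// only an illustration of the "will retry after ..." pattern in the log.
func retryAfter(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // simulated transient failure
		}
		return nil
	})
	fmt.Println("final error:", err)
}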
	I1027 18:57:18.055795   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:18.104947   63277 addons.go:238] Setting addon gcp-auth=true in "addons-864929"
	I1027 18:57:18.105010   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:18.106942   63277 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:18.109558   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:18.110007   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:18.110059   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:18.110215   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:18.128649   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:19.900432   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.538276397s)
	I1027 18:57:19.900485   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.199953129s)
	I1027 18:57:19.900517   63277 addons.go:479] Verifying addon registry=true in "addons-864929"
	I1027 18:57:19.900644   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.059975315s)
	I1027 18:57:19.900669   63277 addons.go:479] Verifying addon metrics-server=true in "addons-864929"
	I1027 18:57:19.900741   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.044632307s)
	I1027 18:57:19.902352   63277 out.go:179] * Verifying registry addon...
	I1027 18:57:19.902350   63277 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-864929 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:19.903449   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.860430632s)
	I1027 18:57:19.903482   63277 addons.go:479] Verifying addon ingress=true in "addons-864929"
	I1027 18:57:19.905028   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:19.905292   63277 out.go:179] * Verifying ingress addon...
	I1027 18:57:19.907320   63277 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:19.958238   63277 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:19.958265   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.958292   63277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:19.958311   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.433182   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.434585   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.543543   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.472423735s)
	W1027 18:57:20.543599   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:20.543627   63277 retry.go:31] will retry after 255.689771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:20.800094   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:20.922952   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.923554   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.442922   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.442981   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.773578   63277 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.618035898s)
	I1027 18:57:21.773618   63277 api_server.go:72] duration metric: took 11.502220917s to wait for apiserver process to appear ...
	I1027 18:57:21.773628   63277 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:57:21.773654   63277 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1027 18:57:21.774535   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.733957112s)
	I1027 18:57:21.774578   63277 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-864929"
	I1027 18:57:21.776672   63277 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:21.779451   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:21.792875   63277 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
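	The healthz probe logged at api_server.go:253/279 is a plain HTTPS GET against the control plane that expects a 200 response with body "ok". A minimal stand-alone sketch of that check follows, reusing the endpoint from the log and skipping certificate verification purely for brevity (minikube's real client is configured with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same kind of probe the log records: GET
// <apiserver>/healthz and report the status code plus body, which is
// "ok" on a healthy control plane.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only to keep the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.216:8443/healthz"); err != nil {
		fmt.Println("healthz check failed:", err)
	}
}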
	I1027 18:57:21.806882   63277 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:21.806906   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.811185   63277 api_server.go:141] control plane version: v1.34.1
	I1027 18:57:21.811218   63277 api_server.go:131] duration metric: took 37.583056ms to wait for apiserver health ...
	I1027 18:57:21.811241   63277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:57:21.837867   63277 system_pods.go:59] 20 kube-system pods found
	I1027 18:57:21.837924   63277 system_pods.go:61] "amd-gpu-device-plugin-zg4tw" [26b73888-1e70-456d-ab70-4392ce52af26] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:21.837935   63277 system_pods.go:61] "coredns-66bc5c9577-5v77t" [13dc8b33-a53f-4df7-8cea-be41471727fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.837946   63277 system_pods.go:61] "coredns-66bc5c9577-f8dfl" [7ada2d5f-c124-4130-8e4d-f5f6f0d2b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.837954   63277 system_pods.go:61] "csi-hostpath-attacher-0" [923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:21.837960   63277 system_pods.go:61] "csi-hostpath-resizer-0" [2d2edb44-d6fd-41c7-aebc-45f7051be9b9] Pending
	I1027 18:57:21.837970   63277 system_pods.go:61] "csi-hostpathplugin-2kk6q" [4df09867-d21a-494d-b1c1-b33d1ae05292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:21.837976   63277 system_pods.go:61] "etcd-addons-864929" [0423c9dd-5674-4e91-be68-a3255c87fce6] Running
	I1027 18:57:21.837982   63277 system_pods.go:61] "kube-apiserver-addons-864929" [b43be527-80f0-4d18-8362-54d51f1f3a19] Running
	I1027 18:57:21.837987   63277 system_pods.go:61] "kube-controller-manager-addons-864929" [f65a9a0f-0799-4414-87de-291236ac723d] Running
	I1027 18:57:21.837995   63277 system_pods.go:61] "kube-ingress-dns-minikube" [66c0967e-2aba-46db-9b8d-50afb9e508c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:21.838001   63277 system_pods.go:61] "kube-proxy-5grdt" [73ab29d4-f3af-4942-87b0-5b146ec49fd2] Running
	I1027 18:57:21.838010   63277 system_pods.go:61] "kube-scheduler-addons-864929" [ac2cfd72-7a4b-46a5-b8fc-d1b7552feb30] Running
	I1027 18:57:21.838017   63277 system_pods.go:61] "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:21.838026   63277 system_pods.go:61] "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:21.838050   63277 system_pods.go:61] "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:21.838063   63277 system_pods.go:61] "registry-creds-764b6fb674-g7z85" [b7d5c5d1-64ba-4adf-b61a-42be8e53ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:21.838072   63277 system_pods.go:61] "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:21.838085   63277 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9nfvf" [e133be4d-c9ac-45ee-8523-3197eb5ae1dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.838099   63277 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t78cg" [07e1f13e-a7d4-496f-9f63-f96306459e61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.838111   63277 system_pods.go:61] "storage-provisioner" [1ec5b960-2f51-438a-9968-46e1bea6ddc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:21.838126   63277 system_pods.go:74] duration metric: took 26.872544ms to wait for pod list to return data ...
	I1027 18:57:21.838141   63277 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:57:21.867654   63277 default_sa.go:45] found service account: "default"
	I1027 18:57:21.867680   63277 default_sa.go:55] duration metric: took 29.532579ms for default service account to be created ...
	I1027 18:57:21.867689   63277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:57:21.883210   63277 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:21.883247   63277 system_pods.go:89] "amd-gpu-device-plugin-zg4tw" [26b73888-1e70-456d-ab70-4392ce52af26] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:21.883257   63277 system_pods.go:89] "coredns-66bc5c9577-5v77t" [13dc8b33-a53f-4df7-8cea-be41471727fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.883266   63277 system_pods.go:89] "coredns-66bc5c9577-f8dfl" [7ada2d5f-c124-4130-8e4d-f5f6f0d2b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.883272   63277 system_pods.go:89] "csi-hostpath-attacher-0" [923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:21.883278   63277 system_pods.go:89] "csi-hostpath-resizer-0" [2d2edb44-d6fd-41c7-aebc-45f7051be9b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:21.883294   63277 system_pods.go:89] "csi-hostpathplugin-2kk6q" [4df09867-d21a-494d-b1c1-b33d1ae05292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:21.883301   63277 system_pods.go:89] "etcd-addons-864929" [0423c9dd-5674-4e91-be68-a3255c87fce6] Running
	I1027 18:57:21.883308   63277 system_pods.go:89] "kube-apiserver-addons-864929" [b43be527-80f0-4d18-8362-54d51f1f3a19] Running
	I1027 18:57:21.883313   63277 system_pods.go:89] "kube-controller-manager-addons-864929" [f65a9a0f-0799-4414-87de-291236ac723d] Running
	I1027 18:57:21.883326   63277 system_pods.go:89] "kube-ingress-dns-minikube" [66c0967e-2aba-46db-9b8d-50afb9e508c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:21.883331   63277 system_pods.go:89] "kube-proxy-5grdt" [73ab29d4-f3af-4942-87b0-5b146ec49fd2] Running
	I1027 18:57:21.883339   63277 system_pods.go:89] "kube-scheduler-addons-864929" [ac2cfd72-7a4b-46a5-b8fc-d1b7552feb30] Running
	I1027 18:57:21.883347   63277 system_pods.go:89] "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:21.883358   63277 system_pods.go:89] "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:21.883365   63277 system_pods.go:89] "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:21.883372   63277 system_pods.go:89] "registry-creds-764b6fb674-g7z85" [b7d5c5d1-64ba-4adf-b61a-42be8e53ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:21.883378   63277 system_pods.go:89] "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:21.883383   63277 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9nfvf" [e133be4d-c9ac-45ee-8523-3197eb5ae1dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.883388   63277 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t78cg" [07e1f13e-a7d4-496f-9f63-f96306459e61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.883393   63277 system_pods.go:89] "storage-provisioner" [1ec5b960-2f51-438a-9968-46e1bea6ddc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:21.883404   63277 system_pods.go:126] duration metric: took 15.70908ms to wait for k8s-apps to be running ...
	I1027 18:57:21.883416   63277 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:57:21.883474   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:57:21.924022   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.927212   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.158899   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.03020142s)
	I1027 18:57:22.158954   63277 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.051987547s)
	W1027 18:57:22.158980   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.159006   63277 retry.go:31] will retry after 279.686083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.160959   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:22.162547   63277 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:22.164115   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:22.164141   63277 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:22.261201   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:22.261230   63277 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:22.288886   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.352572   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:22.352609   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:22.439692   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:22.441909   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:22.481468   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.481666   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.788128   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.914985   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.915276   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.285377   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.418349   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.418666   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.583239   63277 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.699734966s)
	I1027 18:57:23.583281   63277 system_svc.go:56] duration metric: took 1.699860035s WaitForService to wait for kubelet
	I1027 18:57:23.583292   63277 kubeadm.go:586] duration metric: took 13.311893893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:57:23.583319   63277 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:57:23.583423   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783267207s)
	I1027 18:57:23.593344   63277 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 18:57:23.593372   63277 node_conditions.go:123] node cpu capacity is 2
	I1027 18:57:23.593391   63277 node_conditions.go:105] duration metric: took 10.067491ms to run NodePressure ...
	I1027 18:57:23.593404   63277 start.go:241] waiting for startup goroutines ...
	I1027 18:57:23.787519   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.924794   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.924888   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.290306   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.848359661s)
	I1027 18:57:24.291626   63277 addons.go:479] Verifying addon gcp-auth=true in "addons-864929"
	I1027 18:57:24.294508   63277 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:24.296641   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:24.328761   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.328910   63277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:24.328951   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.413802   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:24.416333   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.786549   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.805212   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.915701   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.921802   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.061422   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.621678381s)
	W1027 18:57:25.061478   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:25.061503   63277 retry.go:31] will retry after 804.946825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:25.289162   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.301160   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.421590   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.423412   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.785953   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.802888   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.867047   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:25.919138   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.919440   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.286933   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.301794   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.417105   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.417267   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.785587   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.804169   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.908637   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.912996   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.288028   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.300864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.412910   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.416533   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.456859   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.589752651s)
	W1027 18:57:27.456908   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.456932   63277 retry.go:31] will retry after 685.459936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.784840   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.801850   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.910590   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.912874   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.143005   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:28.285631   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.300220   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.419303   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.422363   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.784623   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.802401   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.911601   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.915428   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.283493   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.300718   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.364540   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.221494949s)
	W1027 18:57:29.364577   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:29.364611   63277 retry.go:31] will retry after 1.757799431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:29.416322   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.418953   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.787868   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.799055   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.910571   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.914273   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.286180   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.303999   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.413104   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.416370   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.787744   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.803419   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.916360   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.919438   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.122558   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:31.285676   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.301308   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.411868   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.412485   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.787290   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.802700   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.913644   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.915831   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.286432   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.304334   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.374445   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251833687s)
	W1027 18:57:32.374511   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:32.374541   63277 retry.go:31] will retry after 2.78595925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:32.416811   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.416913   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.785363   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.804140   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.915420   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.916567   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.292316   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.303111   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.464111   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.464335   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.784707   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.803523   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.909242   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.911455   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.303435   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.303506   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.413609   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.417021   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.784372   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.802229   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.911142   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.916104   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.161393   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:35.283283   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.301025   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:35.410195   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.416262   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.146770   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.157278   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.158333   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.158723   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.286639   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.300897   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.418783   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.423389   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.618778   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.457337067s)
	W1027 18:57:36.618824   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:36.618849   63277 retry.go:31] will retry after 2.808126494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:36.785856   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.800053   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.911223   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.913610   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.283520   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.300915   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.411384   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.411564   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.783128   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.801353   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.908775   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.911143   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.284488   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.302812   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.423418   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.423531   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:38.784017   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.800264   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.911392   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.912809   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.284702   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.302513   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.414232   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:39.414350   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.427461   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:39.837291   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.837565   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.910765   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.914552   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.287903   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.301760   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.416079   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.416206   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.448955   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.021449854s)
	W1027 18:57:40.449007   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:40.449046   63277 retry.go:31] will retry after 2.389005779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:40.785654   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.802757   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.913550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.914781   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.286164   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.300417   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.408904   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.411315   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.783667   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.801000   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.911341   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.911526   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.283379   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.300298   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.413464   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.413759   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.784936   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.801747   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.838978   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:42.914433   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.915753   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.284491   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.306054   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.410133   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.414779   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.787454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.802514   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.914613   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.915563   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.044025   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.205001809s)
	W1027 18:57:44.044086   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:44.044113   63277 retry.go:31] will retry after 6.569226607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:44.286635   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.301882   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.420149   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.420239   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.786772   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.801152   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.907893   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.912659   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.282844   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.299210   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.408847   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.415564   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.785932   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.799703   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.910796   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.912722   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.284380   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.300262   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.411586   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.413618   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.785774   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.802487   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.909401   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.911157   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.285427   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.301018   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.411570   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.415374   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.784426   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.800958   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.909404   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.911321   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.285898   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.301526   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.409153   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.420016   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.784072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.799905   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.910147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.911420   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.283552   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.301303   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.413410   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.413468   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.785136   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.803428   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.912135   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.918025   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.284843   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.300698   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.417847   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.418870   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.614173   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:50.785558   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.803089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.912911   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.914476   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.285211   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.299828   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.410597   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.417162   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.760476   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146250047s)
	W1027 18:57:51.760537   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:51.760566   63277 retry.go:31] will retry after 8.458351618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:51.788367   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.802674   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.912952   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.915907   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.284979   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.302620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.417553   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.422725   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.785476   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.801653   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.911126   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.911882   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.286067   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.300801   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.418960   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.420629   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.851794   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.853714   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.922918   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.923746   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.287898   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.302372   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.425848   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.426641   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.792214   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.801130   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.915252   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.915642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.283583   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.304005   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.408097   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.413323   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.784488   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.806326   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.913127   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.915413   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.427055   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.427252   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.427310   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.428375   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.787593   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.888446   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.912008   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.913074   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.288183   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.305878   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.417164   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.418270   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.784210   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.802894   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.909720   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.912051   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.285258   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.300454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.412828   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.414479   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.784411   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.801492   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.911089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.912058   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.283993   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.299989   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.412668   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.419029   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.784705   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.804623   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.909691   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.912501   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.220065   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:00.284147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.302108   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.416685   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.418642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.786304   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.803095   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.911931   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.915399   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.286093   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.301584   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.412443   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.414896   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.458011   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237894856s)
	W1027 18:58:01.458080   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.458103   63277 retry.go:31] will retry after 16.405228739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.784222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.803092   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.908661   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.910814   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.284729   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.302770   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.414874   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.414965   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.789864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.800637   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.914649   63277 kapi.go:107] duration metric: took 43.009618954s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:58:02.914893   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.286072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.299857   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.418386   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.791799   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.803302   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.914538   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.286257   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.302605   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.416367   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.783206   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.867278   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.911899   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.285072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.300843   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.414023   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.785545   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.803246   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.924390   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.284685   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.301604   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.415639   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.786150   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.886295   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.912165   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.284913   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.302714   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.412538   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.787904   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.801832   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.911724   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.282968   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.300993   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.414821   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.786690   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.803923   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.911877   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.297222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.301996   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.422572   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.788150   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.805824   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.913774   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.293390   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.305508   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.420862   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.792615   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.802761   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.912280   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.288594   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.306089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.417798   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.787690   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.802673   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.912590   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.284220   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.308323   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.414975   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.787839   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.800833   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.915221   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.540620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.543249   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.543347   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.788031   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.805504   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.912643   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.288515   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.303121   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.425413   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.786082   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.800338   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.911089   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.290704   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.300954   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.415781   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.785268   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.801079   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.914809   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.284643   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.301478   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.425519   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.783788   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.802402   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.916061   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.289294   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.307167   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.426377   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.784384   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.800170   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.864299   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:17.913670   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.286332   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.302108   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.413514   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.786024   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.802816   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.911079   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.285445   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.389432   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.439230   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.57487824s)
	W1027 18:58:19.439294   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:19.439322   63277 retry.go:31] will retry after 19.626476762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:19.486856   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.786120   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.806643   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.910901   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.287756   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.302427   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.418486   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.783960   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.800528   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.913267   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.285594   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.302211   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.420494   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.786759   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.804159   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.912377   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.283620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.301149   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.427642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.783574   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.802410   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.914836   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.288209   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.303010   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.421096   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.789207   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.808143   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.911641   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.286064   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.303547   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.425719   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.792130   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.801495   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.913750   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.289935   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.305864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.432159   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.784691   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.803435   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.912224   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.285500   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.301355   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.418759   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.785783   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.810515   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.912606   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.284842   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.300596   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.415566   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.787354   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.800995   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.912310   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.284479   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.303281   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.419682   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.789550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.800133   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.915291   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.288142   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.302992   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.418531   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.785066   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.800998   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.911612   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.287335   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.300823   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.414607   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.785353   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.801683   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.914771   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.286892   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.309512   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.413660   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.784745   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.804007   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.914073   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.285574   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.302369   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.415432   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.787607   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.801278   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.912924   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.286454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.300583   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.413776   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.790802   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.808782   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.912972   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.286709   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.304110   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.420826   63277 kapi.go:107] duration metric: took 1m14.513497503s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:58:34.786102   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.801992   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.285498   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.301550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.784165   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.800807   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.284911   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.299796   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.788910   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.804143   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.284496   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.302139   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.785508   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.802879   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.286869   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.300852   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.786222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.804588   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.066915   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:39.318253   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.410241   63277 kapi.go:107] duration metric: took 1m15.113592039s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:58:39.412086   63277 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-864929 cluster.
	I1027 18:58:39.413383   63277 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:58:39.414377   63277 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1027 18:58:39.785506   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.146885   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.0799187s)
	W1027 18:58:40.146963   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:58:40.147096   63277 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1027 18:58:40.287330   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.782964   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.285147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.783255   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.286213   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.785272   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.282878   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.789437   63277 kapi.go:107] duration metric: took 1m22.009986905s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 18:58:43.791464   63277 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 18:58:43.792829   63277 addons.go:514] duration metric: took 1m33.521403387s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 18:58:43.792875   63277 start.go:246] waiting for cluster config update ...
	I1027 18:58:43.792913   63277 start.go:255] writing updated cluster config ...
	I1027 18:58:43.793226   63277 ssh_runner.go:195] Run: rm -f paused
	I1027 18:58:43.802235   63277 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:43.806653   63277 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f8dfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.812431   63277 pod_ready.go:94] pod "coredns-66bc5c9577-f8dfl" is "Ready"
	I1027 18:58:43.812452   63277 pod_ready.go:86] duration metric: took 5.764753ms for pod "coredns-66bc5c9577-f8dfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.816160   63277 pod_ready.go:83] waiting for pod "etcd-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.821965   63277 pod_ready.go:94] pod "etcd-addons-864929" is "Ready"
	I1027 18:58:43.821993   63277 pod_ready.go:86] duration metric: took 5.807724ms for pod "etcd-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.824005   63277 pod_ready.go:83] waiting for pod "kube-apiserver-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.828898   63277 pod_ready.go:94] pod "kube-apiserver-addons-864929" is "Ready"
	I1027 18:58:43.828923   63277 pod_ready.go:86] duration metric: took 4.897075ms for pod "kube-apiserver-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.830643   63277 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.207152   63277 pod_ready.go:94] pod "kube-controller-manager-addons-864929" is "Ready"
	I1027 18:58:44.207194   63277 pod_ready.go:86] duration metric: took 376.531709ms for pod "kube-controller-manager-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.415720   63277 pod_ready.go:83] waiting for pod "kube-proxy-5grdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.807579   63277 pod_ready.go:94] pod "kube-proxy-5grdt" is "Ready"
	I1027 18:58:44.807611   63277 pod_ready.go:86] duration metric: took 391.860267ms for pod "kube-proxy-5grdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.008299   63277 pod_ready.go:83] waiting for pod "kube-scheduler-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.409571   63277 pod_ready.go:94] pod "kube-scheduler-addons-864929" is "Ready"
	I1027 18:58:45.409599   63277 pod_ready.go:86] duration metric: took 401.265666ms for pod "kube-scheduler-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.409611   63277 pod_ready.go:40] duration metric: took 1.607328787s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:45.455187   63277 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 18:58:45.457073   63277 out.go:179] * Done! kubectl is now configured to use "addons-864929" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.654747694Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:750f263ef26428763cf6b0e145e880c9cc04be8f3139c343ec49c6b28652ca7e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-wmhrh,Uid:ce19a12f-43e8-4993-a64c-ef90bd25467c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591697565205855,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-wmhrh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce19a12f-43e8-4993-a64c-ef90bd25467c,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:01:37.245283681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c16d36466f25d5b4ba3c3eff1817f9578774b155cbb1d756d783ab5c93bdd8c8,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:504b682e-4d7e-4f98-913e-efaa9ccfd4a1,Namespace:default,Attempt:
0,},State:SANDBOX_READY,CreatedAt:1761591571893064553,Labels:map[string]string{app: task-pv-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 504b682e-4d7e-4f98-913e-efaa9ccfd4a1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:59:31.573804320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:570f94482126cb74d7ad40a4629c41914e618364f2b2d8303dde39e5aec6705e,Metadata:&PodSandboxMetadata{Name:test-local-path,Uid:4d1f2112-b21d-4876-abde-84c8de8078a0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591565602023786,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d1f2112-b21d-4876-abde-84c8de8078a0,run: test-local-path,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\
":{},\"labels\":{\"run\":\"test-local-path\"},\"name\":\"test-local-path\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"command\":[\"sh\",\"-c\",\"echo 'local-path-provisioner' \\u003e /test/file1\"],\"image\":\"busybox:stable\",\"name\":\"busybox\",\"volumeMounts\":[{\"mountPath\":\"/test\",\"name\":\"data\"}]}],\"restartPolicy\":\"OnFailure\",\"volumes\":[{\"name\":\"data\",\"persistentVolumeClaim\":{\"claimName\":\"test-pvc\"}}]}}\n,kubernetes.io/config.seen: 2025-10-27T18:59:25.280938920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9e5f3a97-dcd1-44e6-920b-2953ee6ba066,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591552620410485,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,run: nginx,},Annotations:map[string
]string{kubernetes.io/config.seen: 2025-10-27T18:59:12.265024398Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a668ad58-4082-4722-a352-3bd62c30df9b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591526377152878,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:58:46.053221959Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-2kk6q,Uid:4df09867-d21a-494d-b1c1-b33d1ae05292,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591444549557083,Labels:map[string]string{addonmanager.kub
ernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:21.559716524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:2d2edb44-d6fd-41c7-aebc-45f7051be9b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591443520871953,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.k
ubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:21.778463177Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591443518217408,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:21.415845940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-9nfvf,Uid:e133be4d-c9ac-45ee-8523-3197eb5ae1dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591442796934443,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:
57:20.574861188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-t78cg,Uid:07e1f13e-a7d4-496f-9f63-f96306459e61,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591442124536348,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:20.625280940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&PodSandboxMetadata{Name:gadget-5bx7q,Uid:ef4b0394-4dee-4b23-bee8-0787117f056f,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591439173395101,Labels:map[s
tring]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-10-27T18:57:18.154239705Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1ec5b960-2f51-438a-9968-46e1bea6ddc7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591438118533974,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-27T18:57:17.233514543Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&PodSandbox
Metadata{Name:amd-gpu-device-plugin-zg4tw,Uid:26b73888-1e70-456d-ab70-4392ce52af26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591434330085879,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:13.971229357Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&PodSandboxMetadata{Name:kube-proxy-5grdt,Uid:73ab29d4-f3af-4942-87b0-5b146ec49fd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591430883415022,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kub
e-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:09.924411526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-f8dfl,Uid:7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591430862291960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:10.498142658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af045b669200a98e83828b7038b0ba1371f3f501
d38a1aaf2a24eaffe8481851,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-864929,Uid:4738620b04d3027787daeded7d8de7c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418359487933,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4738620b04d3027787daeded7d8de7c7,kubernetes.io/config.seen: 2025-10-27T18:56:57.270034320Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-864929,Uid:8f4246ed8c9b2f11e40ac4ed620904b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418358937051,Labels:map[string]string{component: kube-scheduler,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8f4246ed8c9b2f11e40ac4ed620904b3,kubernetes.io/config.seen: 2025-10-27T18:56:57.270044804Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-864929,Uid:2de27a2c807a456567dcafd8f96dd732,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418356553066,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.216
:8443,kubernetes.io/config.hash: 2de27a2c807a456567dcafd8f96dd732,kubernetes.io/config.seen: 2025-10-27T18:56:57.270032461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&PodSandboxMetadata{Name:etcd-addons-864929,Uid:853670e29e0053cd2968e4d42e8dcd57,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418347207777,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.216:2379,kubernetes.io/config.hash: 853670e29e0053cd2968e4d42e8dcd57,kubernetes.io/config.seen: 2025-10-27T18:56:57.270026443Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d8051055-13b3-4749-be43-68
424ecd327e name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.657351848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81ddf01e-5604-4d39-86ad-69f61d172974 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.657468792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81ddf01e-5604-4d39-86ad-69f61d172974 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.657953447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3
,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933faf9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f0
37e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd095138c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspe
ktor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001
d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94
647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9e
ca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\
"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2
381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81ddf01e-5604-4d39-86ad-69f61d172974 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.682435472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5af6e63-aa54-4447-ae1c-47c32e800343 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.682711919Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5af6e63-aa54-4447-ae1c-47c32e800343 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.684799687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a66062a0-9eda-4699-a33b-c460e2b90d69 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.686232321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591932686205722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a66062a0-9eda-4699-a33b-c460e2b90d69 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.687182455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=616ad861-ec30-4b52-9a13-607ad996e925 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.687245252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=616ad861-ec30-4b52-9a13-607ad996e925 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.687807504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3
,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933faf9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f0
37e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd095138c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspe
ktor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001
d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94
647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9e
ca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\
"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2
381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=616ad861-ec30-4b52-9a13-607ad996e925 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.733065227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed5a3a1b-fd5c-4380-bf5e-dd43a6894b97 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.733136325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed5a3a1b-fd5c-4380-bf5e-dd43a6894b97 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.734060558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a6b364b-9f1b-43b0-9fcd-160930974ba3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.736500926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591932736463945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a6b364b-9f1b-43b0-9fcd-160930974ba3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.739922426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e66f327-f7d6-41a5-89c6-f95d5f550268 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.740053972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e66f327-f7d6-41a5-89c6-f95d5f550268 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.740845778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3
,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933faf9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f0
37e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd095138c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspe
ktor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001
d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94
647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9e
ca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\
"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2
381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e66f327-f7d6-41a5-89c6-f95d5f550268 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.779810121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ce9997f-70c9-4010-ae8b-47efe5350765 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.780053351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ce9997f-70c9-4010-ae8b-47efe5350765 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.781322880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6f50cc9-f524-4a24-ad3b-c231b7ebd17e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.782639464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591932782567035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6f50cc9-f524-4a24-ad3b-c231b7ebd17e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.783138011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d44b211b-5db9-48c9-ab4c-33cdf6dcca75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.783197554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d44b211b-5db9-48c9-ab4c-33cdf6dcca75 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:05:32 addons-864929 crio[816]: time="2025-10-27 19:05:32.783807297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3
,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933faf9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f0
37e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd095138c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspe
ktor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,PodSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001
d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94
647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9e
ca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\
"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.
restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2
381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kub
ernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d44b211b-5db9-48c9-ab4c-33cdf6dcca75 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	adaa598112f17       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                                              6 minutes ago       Running             nginx                                    0                   0e930ac960395       nginx
	c4aa82535ec10       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   2e4a1f88f6c72       busybox
	9a32d240f03f8       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	0067ca876ce6c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	7bd7ab79c70b2       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	9c291b0333c5d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	ba505fec54a41       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	b250e967b1910       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	07110c7b3afc0       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   d3fe0c8c9df1b       csi-hostpath-resizer-0
	f241dd9f7205d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   6429ac3aeaf4c       csi-hostpath-attacher-0
	1708c06c7e746       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   6813db443ad42       snapshot-controller-7d9fbc56b8-9nfvf
	a8f995b816e57       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   41f0fc88d88a6       snapshot-controller-7d9fbc56b8-t78cg
	11101a79fc073       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            7 minutes ago       Running             gadget                                   0                   a34c89c3d97f4       gadget-5bx7q
	47c975912a905       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   eb20897f30dfb       amd-gpu-device-plugin-zg4tw
	9580ed2258f1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   d0de4be78d27d       storage-provisioner
	378ab83eabeec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   a87aa3850ab80       coredns-66bc5c9577-f8dfl
	c25a92cc96070       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   1549458dc06ee       kube-proxy-5grdt
	23a81c0c110d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   34da138882788       kube-scheduler-addons-864929
	473b2a7d1d8d4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   d582ed9677d49       etcd-addons-864929
	4eba041d7c32a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   af045b669200a       kube-controller-manager-addons-864929
	a0eb12ce7e210       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   c5570e67c7a56       kube-apiserver-addons-864929
	
	
	==> coredns [378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3] <==
	[INFO] 10.244.0.22:33406 - 12503 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000417521s
	[INFO] 10.244.0.22:45442 - 11795 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000290954s
	[INFO] 10.244.0.22:33406 - 7960 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000335707s
	[INFO] 10.244.0.22:45442 - 20036 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129949s
	[INFO] 10.244.0.22:33406 - 4331 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000324272s
	[INFO] 10.244.0.22:45442 - 44888 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091189s
	[INFO] 10.244.0.22:45442 - 15636 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099308s
	[INFO] 10.244.0.22:33406 - 30775 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000370838s
	[INFO] 10.244.0.22:45442 - 19061 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000449213s
	[INFO] 10.244.0.22:45442 - 45959 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000473322s
	[INFO] 10.244.0.22:33406 - 32269 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000888744s
	[INFO] 10.244.0.22:38947 - 62796 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000167539s
	[INFO] 10.244.0.22:38947 - 23906 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087006s
	[INFO] 10.244.0.22:38947 - 43877 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080495s
	[INFO] 10.244.0.22:38947 - 21432 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077684s
	[INFO] 10.244.0.22:38947 - 62211 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065991s
	[INFO] 10.244.0.22:38947 - 59955 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118183s
	[INFO] 10.244.0.22:50942 - 2529 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001444462s
	[INFO] 10.244.0.22:38947 - 39721 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000293034s
	[INFO] 10.244.0.22:50942 - 14095 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000123595s
	[INFO] 10.244.0.22:50942 - 7851 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000169449s
	[INFO] 10.244.0.22:50942 - 36653 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129196s
	[INFO] 10.244.0.22:50942 - 46135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000136478s
	[INFO] 10.244.0.22:50942 - 14917 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138736s
	[INFO] 10.244.0.22:50942 - 51502 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077287s
	
	
	==> describe nodes <==
	Name:               addons-864929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-864929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-864929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864929
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-864929"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864929
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:05:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:57:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    addons-864929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 780db33d391d49adb77a2a509bc06274
	  System UUID:                780db33d-391d-49ad-b77a-2a509bc06274
	  Boot ID:                    6fa66b3e-a553-40c9-b7f0-71dd11966be5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  default                     hello-world-app-5d498dc89-wmhrh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     task-pv-pod                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     test-local-path                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  gadget                      gadget-5bx7q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 amd-gpu-device-plugin-zg4tw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 coredns-66bc5c9577-f8dfl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m23s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 csi-hostpathplugin-2kk6q                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 etcd-addons-864929                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m28s
	  kube-system                 kube-apiserver-addons-864929             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-addons-864929    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-5grdt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-addons-864929             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-9nfvf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 snapshot-controller-7d9fbc56b8-t78cg     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m20s  kube-proxy       
	  Normal  Starting                 8m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s  kubelet          Node addons-864929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s  kubelet          Node addons-864929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s  kubelet          Node addons-864929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m27s  kubelet          Node addons-864929 status is now: NodeReady
	  Normal  RegisteredNode           8m24s  node-controller  Node addons-864929 event: Registered Node addons-864929 in Controller
	
	
	==> dmesg <==
	[  +1.035983] kauditd_printk_skb: 321 callbacks suppressed
	[  +0.074749] kauditd_printk_skb: 215 callbacks suppressed
	[  +0.252144] kauditd_printk_skb: 390 callbacks suppressed
	[ +13.923984] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.170668] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.426658] kauditd_printk_skb: 32 callbacks suppressed
	[Oct27 18:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.493718] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.181992] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.064652] kauditd_printk_skb: 94 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.654510] kauditd_printk_skb: 156 callbacks suppressed
	[  +5.691951] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.014421] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.186188] kauditd_printk_skb: 26 callbacks suppressed
	[ +13.043727] kauditd_printk_skb: 47 callbacks suppressed
	[Oct27 18:59] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.809040] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.269736] kauditd_printk_skb: 141 callbacks suppressed
	[  +0.027386] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.740720] kauditd_printk_skb: 139 callbacks suppressed
	[ +11.255527] kauditd_printk_skb: 58 callbacks suppressed
	[Oct27 19:01] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.261320] kauditd_printk_skb: 46 callbacks suppressed
	[Oct27 19:02] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c] <==
	{"level":"info","ts":"2025-10-27T18:57:56.410315Z","caller":"traceutil/trace.go:172","msg":"trace[229503704] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:948; }","duration":"134.367662ms","start":"2025-10-27T18:57:56.275938Z","end":"2025-10-27T18:57:56.410305Z","steps":["trace[229503704] 'agreement among raft nodes before linearized reading'  (duration: 132.957173ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:57:56.410060Z","caller":"traceutil/trace.go:172","msg":"trace[891786162] linearizableReadLoop","detail":"{readStateIndex:975; appliedIndex:975; }","duration":"131.983723ms","start":"2025-10-27T18:57:56.275942Z","end":"2025-10-27T18:57:56.407926Z","steps":["trace[891786162] 'read index received'  (duration: 131.979399ms)","trace[891786162] 'applied index is now lower than readState.Index'  (duration: 3.544µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:57:56.412263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.94226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:57:56.412309Z","caller":"traceutil/trace.go:172","msg":"trace[262639361] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:948; }","duration":"119.995829ms","start":"2025-10-27T18:57:56.292305Z","end":"2025-10-27T18:57:56.412301Z","steps":["trace[262639361] 'agreement among raft nodes before linearized reading'  (duration: 119.922856ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.125078Z","caller":"traceutil/trace.go:172","msg":"trace[493772090] linearizableReadLoop","detail":"{readStateIndex:1016; appliedIndex:1016; }","duration":"108.067998ms","start":"2025-10-27T18:58:11.016880Z","end":"2025-10-27T18:58:11.124948Z","steps":["trace[493772090] 'read index received'  (duration: 108.0588ms)","trace[493772090] 'applied index is now lower than readState.Index'  (duration: 7.728µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:11.125323Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.422079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2025-10-27T18:58:11.125351Z","caller":"traceutil/trace.go:172","msg":"trace[2111825061] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:984; }","duration":"108.467942ms","start":"2025-10-27T18:58:11.016877Z","end":"2025-10-27T18:58:11.125345Z","steps":["trace[2111825061] 'agreement among raft nodes before linearized reading'  (duration: 108.282493ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.125763Z","caller":"traceutil/trace.go:172","msg":"trace[1553925984] transaction","detail":"{read_only:false; response_revision:985; number_of_response:1; }","duration":"186.294868ms","start":"2025-10-27T18:58:10.939461Z","end":"2025-10-27T18:58:11.125756Z","steps":["trace[1553925984] 'process raft request'  (duration: 186.212532ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.142309Z","caller":"traceutil/trace.go:172","msg":"trace[839786025] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"138.309645ms","start":"2025-10-27T18:58:11.003986Z","end":"2025-10-27T18:58:11.142296Z","steps":["trace[839786025] 'process raft request'  (duration: 138.098647ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:13.531058Z","caller":"traceutil/trace.go:172","msg":"trace[30205562] linearizableReadLoop","detail":"{readStateIndex:1025; appliedIndex:1025; }","duration":"254.599969ms","start":"2025-10-27T18:58:13.276437Z","end":"2025-10-27T18:58:13.531037Z","steps":["trace[30205562] 'read index received'  (duration: 254.54701ms)","trace[30205562] 'applied index is now lower than readState.Index'  (duration: 3.554µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:13.531448Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.007373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.531551Z","caller":"traceutil/trace.go:172","msg":"trace[1564347891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:993; }","duration":"255.121817ms","start":"2025-10-27T18:58:13.276412Z","end":"2025-10-27T18:58:13.531534Z","steps":["trace[1564347891] 'agreement among raft nodes before linearized reading'  (duration: 254.972595ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:13.531509Z","caller":"traceutil/trace.go:172","msg":"trace[1892686575] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"391.579159ms","start":"2025-10-27T18:58:13.139914Z","end":"2025-10-27T18:58:13.531493Z","steps":["trace[1892686575] 'process raft request'  (duration: 391.354515ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:13.531824Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T18:58:13.139894Z","time spent":"391.808403ms","remote":"127.0.0.1:52894","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:985 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-27T18:58:13.532035Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.107659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.532079Z","caller":"traceutil/trace.go:172","msg":"trace[2038100128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"126.149586ms","start":"2025-10-27T18:58:13.405923Z","end":"2025-10-27T18:58:13.532072Z","steps":["trace[2038100128] 'agreement among raft nodes before linearized reading'  (duration: 126.101237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:13.531900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"238.553844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.532339Z","caller":"traceutil/trace.go:172","msg":"trace[854445731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"239.005326ms","start":"2025-10-27T18:58:13.293326Z","end":"2025-10-27T18:58:13.532332Z","steps":["trace[854445731] 'agreement among raft nodes before linearized reading'  (duration: 238.54211ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:34.711927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.634065ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:34.712061Z","caller":"traceutil/trace.go:172","msg":"trace[895249490] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1119; }","duration":"113.796373ms","start":"2025-10-27T18:58:34.598253Z","end":"2025-10-27T18:58:34.712049Z","steps":["trace[895249490] 'range keys from in-memory index tree'  (duration: 113.587415ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:38.243222Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.660222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:38.243481Z","caller":"traceutil/trace.go:172","msg":"trace[698536657] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1136; }","duration":"193.931351ms","start":"2025-10-27T18:58:38.049536Z","end":"2025-10-27T18:58:38.243467Z","steps":["trace[698536657] 'range keys from in-memory index tree'  (duration: 193.593238ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:42.492190Z","caller":"traceutil/trace.go:172","msg":"trace[1973944569] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"119.999969ms","start":"2025-10-27T18:58:42.372178Z","end":"2025-10-27T18:58:42.492178Z","steps":["trace[1973944569] 'process raft request'  (duration: 119.899102ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:59:11.698647Z","caller":"traceutil/trace.go:172","msg":"trace[361898695] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1348; }","duration":"135.106526ms","start":"2025-10-27T18:59:11.563481Z","end":"2025-10-27T18:59:11.698587Z","steps":["trace[361898695] 'process raft request'  (duration: 135.018245ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:59:14.141155Z","caller":"traceutil/trace.go:172","msg":"trace[837123529] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"206.995462ms","start":"2025-10-27T18:59:13.934147Z","end":"2025-10-27T18:59:14.141142Z","steps":["trace[837123529] 'process raft request'  (duration: 206.907826ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:05:33 up 9 min,  0 users,  load average: 0.39, 0.87, 0.70
	Linux addons-864929 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e] <==
	W1027 18:57:22.323729       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:22.344896       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1027 18:57:23.882919       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.65.231"}
	W1027 18:57:39.184847       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:39.206284       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:39.243149       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:39.253377       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1027 18:58:11.250340       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:11.250699       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:11.250761       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:11.256876       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.257462       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.269028       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.311326       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:11.522386       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:58:56.253891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.216:8443->192.168.39.1:59114: use of closed network connection
	E1027 18:58:56.463232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.216:8443->192.168.39.1:59134: use of closed network connection
	I1027 18:59:05.726497       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.152.62"}
	I1027 18:59:12.082722       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 18:59:12.280737       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1027 18:59:12.320355       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.254.157"}
	I1027 19:01:37.350902       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.103.64"}
	
	
	==> kube-controller-manager [4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c] <==
	I1027 18:57:09.197029       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 18:57:09.197199       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 18:57:09.198769       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 18:57:09.199421       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 18:57:09.199859       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 18:57:09.202220       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 18:57:09.202262       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 18:57:09.204812       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-864929" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:09.205058       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 18:57:09.205335       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 18:57:09.209450       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	E1027 18:57:17.574464       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 18:57:39.171065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:57:39.171215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:57:39.171282       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:57:39.201480       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:57:39.217694       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 18:57:39.272250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:39.320974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 18:58:09.292080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:58:09.340459       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:59:09.765540       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1027 18:59:29.759701       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1027 18:59:41.557310       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1027 19:01:52.106852       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c] <==
	I1027 18:57:11.964888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:12.066455       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:12.066978       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.216"]
	E1027 18:57:12.067747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:12.441037       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 18:57:12.441091       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 18:57:12.441116       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:12.549755       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:12.551449       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:12.551483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:12.643682       1 config.go:200] "Starting service config controller"
	I1027 18:57:12.643795       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:12.644779       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:12.644795       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:12.644821       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:12.644825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:12.652942       1 config.go:309] "Starting node config controller"
	I1027 18:57:12.654707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:12.654716       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:12.746008       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 18:57:12.746581       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:12.760983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522] <==
	E1027 18:57:02.235831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:02.235898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:02.236336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:02.236405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:02.236138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:02.236633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:02.236754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:02.237054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:02.237146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:02.237161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.169999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:03.170507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:03.173314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 18:57:03.241827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 18:57:03.244384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:03.277509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:03.311109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 18:57:03.348245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.360178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:03.360672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:03.390147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:03.532742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:03.622727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:03.635923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1027 18:57:06.218759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:04:41 addons-864929 kubelet[1502]: E1027 19:04:41.018370    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 27 19:04:41 addons-864929 kubelet[1502]: E1027 19:04:41.018440    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 27 19:04:41 addons-864929 kubelet[1502]: E1027 19:04:41.018988    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(4d1f2112-b21d-4876-abde-84c8de8078a0): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:04:41 addons-864929 kubelet[1502]: E1027 19:04:41.019038    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:04:45 addons-864929 kubelet[1502]: E1027 19:04:45.811128    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591885810732142  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:04:45 addons-864929 kubelet[1502]: E1027 19:04:45.811174    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591885810732142  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:04:53 addons-864929 kubelet[1502]: E1027 19:04:53.309788    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:04:55 addons-864929 kubelet[1502]: E1027 19:04:55.813949    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591895813310120  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:04:55 addons-864929 kubelet[1502]: E1027 19:04:55.813974    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591895813310120  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:05 addons-864929 kubelet[1502]: E1027 19:05:05.313239    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:05:05 addons-864929 kubelet[1502]: E1027 19:05:05.816759    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591905816312649  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:05 addons-864929 kubelet[1502]: E1027 19:05:05.816805    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591905816312649  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:11 addons-864929 kubelet[1502]: E1027 19:05:11.119980    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 27 19:05:11 addons-864929 kubelet[1502]: E1027 19:05:11.120049    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 27 19:05:11 addons-864929 kubelet[1502]: E1027 19:05:11.120224    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(504b682e-4d7e-4f98-913e-efaa9ccfd4a1): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:05:11 addons-864929 kubelet[1502]: E1027 19:05:11.120255    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="504b682e-4d7e-4f98-913e-efaa9ccfd4a1"
	Oct 27 19:05:15 addons-864929 kubelet[1502]: E1027 19:05:15.820071    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591915819529286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:15 addons-864929 kubelet[1502]: E1027 19:05:15.820099    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591915819529286  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:17 addons-864929 kubelet[1502]: E1027 19:05:17.307854    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:05:25 addons-864929 kubelet[1502]: I1027 19:05:25.306495    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zg4tw" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:05:25 addons-864929 kubelet[1502]: E1027 19:05:25.823158    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591925822688335  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:25 addons-864929 kubelet[1502]: E1027 19:05:25.823193    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591925822688335  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:05:26 addons-864929 kubelet[1502]: E1027 19:05:26.305333    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="504b682e-4d7e-4f98-913e-efaa9ccfd4a1"
	Oct 27 19:05:28 addons-864929 kubelet[1502]: E1027 19:05:28.306725    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="4d1f2112-b21d-4876-abde-84c8de8078a0"
	Oct 27 19:05:30 addons-864929 kubelet[1502]: I1027 19:05:30.305820    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9] <==
	W1027 19:05:08.401578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:10.407366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:10.413014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:12.415666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:12.422055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:14.425540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:14.431097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:16.435503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:16.443395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:18.447467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:18.455195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:20.458396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:20.463858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:22.467365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:22.472871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:24.476471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:24.481655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:26.485023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:26.493188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:28.497071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:28.508526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:30.513254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:30.519319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:32.524032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:32.535021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
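The kubelet excerpt above shows the proximate cause of the ImagePullBackOff failures: Docker Hub's anonymous pull rate limit (toomanyrequests) rather than a cluster fault, while the storage-provisioner warnings about v1 Endpoints deprecation are unrelated noise. A minimal sketch of one workaround, assuming a Docker Hub account is available (the secret name dockerhub-creds and the credential placeholders are illustrative and not part of this test run):

	kubectl --context addons-864929 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-864929 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

With the default ServiceAccount patched, pods created afterwards in the default namespace pull as an authenticated user, which raises the rate limit; pods that already exist would need to be recreated to pick the secret up.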
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-864929 -n addons-864929
helpers_test.go:269: (dbg) Run:  kubectl --context addons-864929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path
helpers_test.go:290: (dbg) kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path:

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-wmhrh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 19:01:37 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpvrk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xpvrk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m56s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-wmhrh to addons-864929
	  Warning  Failed     3m7s                 kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     83s (x2 over 3m7s)   kubelet            Error: ErrImagePull
	  Warning  Failed     83s                  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    68s (x2 over 3m7s)   kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     68s (x2 over 3m7s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    53s (x3 over 3m56s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 18:59:31 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h8cn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-6h8cn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m2s                default-scheduler  Successfully assigned default/task-pv-pod to addons-864929
	  Normal   Pulling    77s (x4 over 6m1s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     22s (x4 over 5m7s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     22s (x4 over 5m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x6 over 5m7s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     7s (x6 over 5m7s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 18:59:25 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mgjnr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-mgjnr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m8s                 default-scheduler  Successfully assigned default/test-local-path to addons-864929
	  Warning  Failed     4m7s                 kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    103s (x4 over 6m8s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     52s (x3 over 5m38s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x4 over 5m38s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x9 over 5m37s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     5s (x9 over 5m37s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
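The describe output above repeats the same waiting reason for all three pods. A quick way to get that summary directly, sketched here with the same field selector the helper ran earlier (the jsonpath template is illustrative, not something the test executes):

	kubectl --context addons-864929 get pods -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'

For this run it would print ImagePullBackOff (or ErrImagePull) for hello-world-app-5d498dc89-wmhrh, task-pv-pod and test-local-path.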
helpers_test.go:293: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.03724757s)
--- FAIL: TestAddons/parallel/CSI (383.60s)
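Because every failure in this block traces back to anonymous pulls of docker.io images, one way to make the run independent of Docker Hub is to load the images into the cluster ahead of the test. A minimal sketch, assuming the images can be pulled (or are already cached) on the host with authenticated credentials:

	docker login docker.io
	docker pull docker.io/nginx:latest
	docker pull docker.io/kicbase/echo-server:1.0
	minikube -p addons-864929 image load docker.io/nginx:latest
	minikube -p addons-864929 image load docker.io/kicbase/echo-server:1.0

minikube image load copies the image from the host into the cluster's container runtime (CRI-O here), so the kubelet can find it locally instead of hitting the registry, provided the pod's imagePullPolicy does not force a fresh pull.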

                                                
                                    
x
+
TestAddons/parallel/LocalPath (229.68s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-864929 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-864929 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-864929 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4d1f2112-b21d-4876-abde-84c8de8078a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-864929 -n addons-864929
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-10-27 19:02:25.575220182 +0000 UTC m=+373.499762289
addons_test.go:962: (dbg) Run:  kubectl --context addons-864929 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-864929 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-864929/192.168.39.216
Start Time:       Mon, 27 Oct 2025 18:59:25 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  busybox:
    Container ID:  
    Image:         busybox:stable
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo 'local-path-provisioner' > /test/file1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mgjnr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-mgjnr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/test-local-path to addons-864929
  Warning  Failed     2m30s                kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     59s (x2 over 2m30s)  kubelet            Error: ErrImagePull
  Warning  Failed     59s                  kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    48s (x2 over 2m29s)  kubelet            Back-off pulling image "busybox:stable"
  Warning  Failed     48s (x2 over 2m29s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    36s (x3 over 3m)     kubelet            Pulling image "busybox:stable"
addons_test.go:962: (dbg) Run:  kubectl --context addons-864929 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-864929 logs test-local-path -n default: exit status 1 (72.447893ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:962: kubectl --context addons-864929 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
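The helper above polls the PVC phase and the pod label selector in a loop; the same gates can be expressed with kubectl wait, shown here only as an illustrative equivalent of what the test was waiting for (not commands the test ran):

	kubectl --context addons-864929 wait pvc/test-pvc \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=300s
	kubectl --context addons-864929 wait pod -l run=test-local-path \
	  --for=condition=Ready --timeout=180s

Both commands exit non-zero on timeout, which is effectively what happened here once the busybox:stable pull kept hitting the rate limit.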
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-864929 -n addons-864929
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 logs -n 25: (1.348072093s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-343850                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-021762                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-021762 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-343850                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-001257 --alsologtostderr --binary-mirror http://127.0.0.1:33585 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-001257 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-001257                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-001257 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-864929 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:58 UTC │
	│ addons  │ addons-864929 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:58 UTC │ 27 Oct 25 18:58 UTC │
	│ addons  │ addons-864929 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:58 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ enable headlamp -p addons-864929 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-864929                                                                                                                                                                                                                                                                                                                                                                                         │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ip      │ addons-864929 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ssh     │ addons-864929 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-864929 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-864929 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ ip      │ addons-864929 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	│ addons  │ addons-864929 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	│ addons  │ addons-864929 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-864929        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:24.622422   63277 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:24.622686   63277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:24.622698   63277 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:24.622702   63277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:24.622910   63277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 18:56:24.623413   63277 out.go:368] Setting JSON to false
	I1027 18:56:24.624309   63277 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5935,"bootTime":1761585450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:24.624396   63277 start.go:141] virtualization: kvm guest
	I1027 18:56:24.626201   63277 out.go:179] * [addons-864929] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:24.627811   63277 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:24.627823   63277 notify.go:220] Checking for updates...
	I1027 18:56:24.630357   63277 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:24.631602   63277 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:56:24.632948   63277 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:24.634382   63277 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 18:56:24.635581   63277 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:24.637140   63277 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:24.668548   63277 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 18:56:24.669928   63277 start.go:305] selected driver: kvm2
	I1027 18:56:24.669964   63277 start.go:925] validating driver "kvm2" against <nil>
	I1027 18:56:24.669977   63277 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:24.670794   63277 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:24.671024   63277 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:24.671068   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:56:24.671115   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:24.671129   63277 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:24.671178   63277 start.go:349] cluster config:
	{Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1027 18:56:24.671272   63277 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:56:24.672823   63277 out.go:179] * Starting "addons-864929" primary control-plane node in "addons-864929" cluster
	I1027 18:56:24.674049   63277 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:24.674093   63277 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:24.674104   63277 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:24.674220   63277 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 18:56:24.674236   63277 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:24.674548   63277 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json ...
	I1027 18:56:24.674571   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json: {Name:mk9ba1259c08877b5975916a854db91dcc4ee818 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:24.674732   63277 start.go:360] acquireMachinesLock for addons-864929: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 18:56:24.674798   63277 start.go:364] duration metric: took 48.986µs to acquireMachinesLock for "addons-864929"
	I1027 18:56:24.674823   63277 start.go:93] Provisioning new machine with config: &{Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:56:24.674873   63277 start.go:125] createHost starting for "" (driver="kvm2")
	I1027 18:56:24.676393   63277 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1027 18:56:24.676558   63277 start.go:159] libmachine.API.Create for "addons-864929" (driver="kvm2")
	I1027 18:56:24.676590   63277 client.go:168] LocalClient.Create starting
	I1027 18:56:24.676678   63277 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem
	I1027 18:56:24.780202   63277 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem
	I1027 18:56:24.900124   63277 main.go:141] libmachine: creating domain...
	I1027 18:56:24.900145   63277 main.go:141] libmachine: creating network...
	I1027 18:56:24.901617   63277 main.go:141] libmachine: found existing default network
	I1027 18:56:24.901796   63277 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.902284   63277 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d609d0}
	I1027 18:56:24.902387   63277 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-864929</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.908158   63277 main.go:141] libmachine: creating private network mk-addons-864929 192.168.39.0/24...
	I1027 18:56:24.980252   63277 main.go:141] libmachine: private network mk-addons-864929 192.168.39.0/24 created
	I1027 18:56:24.980545   63277 main.go:141] libmachine: <network>
	  <name>mk-addons-864929</name>
	  <uuid>aef0d375-daa4-4865-b6ed-55a30809a7b8</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:71:bd:45'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 18:56:24.980576   63277 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 ...
	I1027 18:56:24.980605   63277 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1027 18:56:24.980620   63277 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:24.980717   63277 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21801-58821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1027 18:56:25.217277   63277 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa...
	I1027 18:56:25.365950   63277 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk...
	I1027 18:56:25.365998   63277 main.go:141] libmachine: Writing magic tar header
	I1027 18:56:25.366060   63277 main.go:141] libmachine: Writing SSH key tar header
	I1027 18:56:25.366173   63277 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 ...
	I1027 18:56:25.366260   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929
	I1027 18:56:25.366305   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929 (perms=drwx------)
	I1027 18:56:25.366334   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines
	I1027 18:56:25.366351   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines (perms=drwxr-xr-x)
	I1027 18:56:25.366370   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:25.366382   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube (perms=drwxr-xr-x)
	I1027 18:56:25.366392   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821
	I1027 18:56:25.366400   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821 (perms=drwxrwxr-x)
	I1027 18:56:25.366413   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1027 18:56:25.366429   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1027 18:56:25.366447   63277 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1027 18:56:25.366462   63277 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1027 18:56:25.366477   63277 main.go:141] libmachine: checking permissions on dir: /home
	I1027 18:56:25.366489   63277 main.go:141] libmachine: skipping /home - not owner
	I1027 18:56:25.366496   63277 main.go:141] libmachine: defining domain...
	I1027 18:56:25.367845   63277 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-864929</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-864929'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1027 18:56:25.373162   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:b7:94:cf in network default
	I1027 18:56:25.374053   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:25.374075   63277 main.go:141] libmachine: starting domain...
	I1027 18:56:25.374080   63277 main.go:141] libmachine: ensuring networks are active...
	I1027 18:56:25.374872   63277 main.go:141] libmachine: Ensuring network default is active
	I1027 18:56:25.375277   63277 main.go:141] libmachine: Ensuring network mk-addons-864929 is active
	I1027 18:56:25.375873   63277 main.go:141] libmachine: getting domain XML...
	I1027 18:56:25.376860   63277 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-864929</name>
	  <uuid>780db33d-391d-49ad-b77a-2a509bc06274</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/addons-864929.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f3:30:05'/>
	      <source network='mk-addons-864929'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b7:94:cf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1027 18:56:26.638954   63277 main.go:141] libmachine: waiting for domain to start...
	I1027 18:56:26.640594   63277 main.go:141] libmachine: domain is now running
	I1027 18:56:26.640612   63277 main.go:141] libmachine: waiting for IP...
	I1027 18:56:26.641493   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:26.642006   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:26.642018   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:26.642278   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:26.642335   63277 retry.go:31] will retry after 204.12408ms: waiting for domain to come up
	I1027 18:56:26.847933   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:26.848726   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:26.848744   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:26.849096   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:26.849145   63277 retry.go:31] will retry after 259.734271ms: waiting for domain to come up
	I1027 18:56:27.110506   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.111193   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.111211   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.111565   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.111600   63277 retry.go:31] will retry after 353.747338ms: waiting for domain to come up
	I1027 18:56:27.467217   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.467990   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.468008   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.468404   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.468443   63277 retry.go:31] will retry after 408.188052ms: waiting for domain to come up
	I1027 18:56:27.877925   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:27.878585   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:27.878600   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:27.878986   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:27.879025   63277 retry.go:31] will retry after 584.807504ms: waiting for domain to come up
	I1027 18:56:28.465800   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:28.466457   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:28.466477   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:28.466925   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:28.466985   63277 retry.go:31] will retry after 655.104002ms: waiting for domain to come up
	I1027 18:56:29.123804   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:29.124507   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:29.124524   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:29.124825   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:29.124862   63277 retry.go:31] will retry after 1.151715647s: waiting for domain to come up
	I1027 18:56:30.278089   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:30.278736   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:30.278753   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:30.279106   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:30.279148   63277 retry.go:31] will retry after 899.383524ms: waiting for domain to come up
	I1027 18:56:31.180495   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:31.181365   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:31.181386   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:31.181743   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:31.181784   63277 retry.go:31] will retry after 1.154847749s: waiting for domain to come up
	I1027 18:56:32.337959   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:32.338631   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:32.338648   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:32.339016   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:32.339058   63277 retry.go:31] will retry after 1.618753171s: waiting for domain to come up
	I1027 18:56:33.960150   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:33.960873   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:33.960906   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:33.961382   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:33.961433   63277 retry.go:31] will retry after 2.574218898s: waiting for domain to come up
	I1027 18:56:36.537741   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:36.538394   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:36.538410   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:36.538756   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:36.538790   63277 retry.go:31] will retry after 3.021550252s: waiting for domain to come up
	I1027 18:56:39.563948   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:39.564552   63277 main.go:141] libmachine: no network interface addresses found for domain addons-864929 (source=lease)
	I1027 18:56:39.564573   63277 main.go:141] libmachine: trying to list again with source=arp
	I1027 18:56:39.564876   63277 main.go:141] libmachine: unable to find current IP address of domain addons-864929 in network mk-addons-864929 (interfaces detected: [])
	I1027 18:56:39.564921   63277 retry.go:31] will retry after 3.629212065s: waiting for domain to come up
	I1027 18:56:43.197968   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.198898   63277 main.go:141] libmachine: domain addons-864929 has current primary IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.198915   63277 main.go:141] libmachine: found domain IP: 192.168.39.216
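
The lines above show libmachine polling for the guest's address: it first checks the libvirt DHCP leases (source=lease), falls back to ARP (source=arp), and retries with a growing, jittered delay until an interface reports an IP. Below is a minimal Go sketch of that retry-with-backoff pattern; lookupIP and the exact delay schedule are illustrative stand-ins, not minikube's actual implementation.

package sketch

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt DHCP leases and the ARP table;
// here it is assumed to fail until the guest has acquired an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no network interface addresses found")
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		delay := time.Duration(attempt) * 200 * time.Millisecond
		delay += time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for IP of domain %s", domain)
}
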
	I1027 18:56:43.198925   63277 main.go:141] libmachine: reserving static IP address...
	I1027 18:56:43.199329   63277 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-864929", mac: "52:54:00:f3:30:05", ip: "192.168.39.216"} in network mk-addons-864929
	I1027 18:56:43.451430   63277 main.go:141] libmachine: reserved static IP address 192.168.39.216 for domain addons-864929
	I1027 18:56:43.451477   63277 main.go:141] libmachine: waiting for SSH...
	I1027 18:56:43.451483   63277 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 18:56:43.455019   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.455546   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.455575   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.455753   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.456085   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.456098   63277 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 18:56:43.560285   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:56:43.560764   63277 main.go:141] libmachine: domain creation complete
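
Once the address is known, the machine only counts as up after a trivial command succeeds over SSH: the WaitForSSH step above runs "exit 0" until the daemon answers. A hedged sketch of that readiness probe using golang.org/x/crypto/ssh follows; the user, key path, and retry interval are assumptions for illustration.

package sketch

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr and runs "exit 0" until the command succeeds,
// mirroring the readiness probe visible in the log above.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, derr := ssh.Dial("tcp", addr, cfg); derr == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not ready on %s after %v", addr, timeout)
}
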
	I1027 18:56:43.562456   63277 machine.go:93] provisionDockerMachine start ...
	I1027 18:56:43.564923   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.565392   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.565416   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.565609   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.565938   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.565959   63277 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:56:43.669544   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 18:56:43.669580   63277 buildroot.go:166] provisioning hostname "addons-864929"
	I1027 18:56:43.672967   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.673411   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.673440   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.673604   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.673806   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.673817   63277 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864929 && echo "addons-864929" | sudo tee /etc/hostname
	I1027 18:56:43.795625   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864929
	
	I1027 18:56:43.798861   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.799296   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.799317   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.799492   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:43.799700   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:43.799715   63277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:56:43.910892   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:56:43.910939   63277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 18:56:43.910981   63277 buildroot.go:174] setting up certificates
	I1027 18:56:43.910994   63277 provision.go:84] configureAuth start
	I1027 18:56:43.913915   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.914336   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.914362   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.916504   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.916890   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:43.916954   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:43.917128   63277 provision.go:143] copyHostCerts
	I1027 18:56:43.917210   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 18:56:43.917348   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 18:56:43.917476   63277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 18:56:43.917558   63277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.addons-864929 san=[127.0.0.1 192.168.39.216 addons-864929 localhost minikube]
	I1027 18:56:44.249940   63277 provision.go:177] copyRemoteCerts
	I1027 18:56:44.250009   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:56:44.252895   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.253468   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.253497   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.253713   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.336145   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:56:44.366470   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:56:44.396879   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 18:56:44.427777   63277 provision.go:87] duration metric: took 516.764566ms to configureAuth
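
configureAuth above copies the host CA material and issues a server certificate whose SANs cover 127.0.0.1, 192.168.39.216, addons-864929, localhost and minikube before scp'ing the files to /etc/docker on the guest. Here is a minimal crypto/x509 sketch of issuing such a SAN-bearing server certificate; key size, validity, serial number, and subject are illustrative, not the values minikube uses.

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA, embedding
// the IP and DNS SANs seen in the log above.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (der []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-864929"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.216")},
		DNSNames:     []string{"addons-864929", "localhost", "minikube"},
	}
	der, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
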
	I1027 18:56:44.427808   63277 buildroot.go:189] setting minikube options for container-runtime
	I1027 18:56:44.428052   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:56:44.430830   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.431257   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.431285   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.431516   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:44.431741   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:44.431759   63277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:56:44.684141   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:56:44.684169   63277 machine.go:96] duration metric: took 1.121694006s to provisionDockerMachine
	I1027 18:56:44.684180   63277 client.go:171] duration metric: took 20.007583494s to LocalClient.Create
	I1027 18:56:44.684313   63277 start.go:167] duration metric: took 20.00763875s to libmachine.API.Create "addons-864929"
	I1027 18:56:44.684443   63277 start.go:293] postStartSetup for "addons-864929" (driver="kvm2")
	I1027 18:56:44.684457   63277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:56:44.684684   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:56:44.687967   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.688366   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.688388   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.688532   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.773838   63277 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:56:44.779587   63277 info.go:137] Remote host: Buildroot 2025.02
	I1027 18:56:44.779618   63277 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 18:56:44.779720   63277 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 18:56:44.779744   63277 start.go:296] duration metric: took 95.294071ms for postStartSetup
	I1027 18:56:44.783531   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.783956   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.783992   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.784296   63277 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/config.json ...
	I1027 18:56:44.784513   63277 start.go:128] duration metric: took 20.109628328s to createHost
	I1027 18:56:44.787202   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.787607   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.787630   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.787827   63277 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:44.788095   63277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1027 18:56:44.788112   63277 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 18:56:44.892155   63277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761591404.854722623
	
	I1027 18:56:44.892187   63277 fix.go:216] guest clock: 1761591404.854722623
	I1027 18:56:44.892195   63277 fix.go:229] Guest: 2025-10-27 18:56:44.854722623 +0000 UTC Remote: 2025-10-27 18:56:44.784525373 +0000 UTC m=+20.209597039 (delta=70.19725ms)
	I1027 18:56:44.892213   63277 fix.go:200] guest clock delta is within tolerance: 70.19725ms
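
The fix.go lines above read the guest clock with "date +%s.%N", compare it against the host clock at the moment the command returned, and accept the machine when the delta stays within tolerance. A small sketch of that comparison follows; the parsing approach and the tolerance value passed in are illustrative (float parsing loses a little sub-microsecond precision, which is irrelevant at this scale).

package sketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// checkGuestClock parses the guest's `date +%s.%N` output and compares it
// with the host clock, rejecting the machine when the skew exceeds the
// given tolerance.
func checkGuestClock(guestOutput string, hostNow time.Time, tolerance time.Duration) error {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return fmt.Errorf("cannot parse guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := hostNow.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	return nil
}
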
	I1027 18:56:44.892218   63277 start.go:83] releasing machines lock for "addons-864929", held for 20.217407876s
	I1027 18:56:44.895316   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.895759   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.895786   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.896530   63277 ssh_runner.go:195] Run: cat /version.json
	I1027 18:56:44.896625   63277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:56:44.899743   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.899867   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900211   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.900246   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900407   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:44.900437   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:44.900431   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.900649   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:56:44.976028   63277 ssh_runner.go:195] Run: systemctl --version
	I1027 18:56:45.001174   63277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:56:45.161871   63277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:56:45.169373   63277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:56:45.169442   63277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:56:45.190185   63277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 18:56:45.190215   63277 start.go:495] detecting cgroup driver to use...
	I1027 18:56:45.190307   63277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:56:45.209752   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:56:45.232403   63277 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:56:45.232474   63277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:56:45.253470   63277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:56:45.271232   63277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:56:45.419310   63277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:56:45.638393   63277 docker.go:234] disabling docker service ...
	I1027 18:56:45.638482   63277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:56:45.655615   63277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:56:45.671872   63277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:56:45.833201   63277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:56:45.978905   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 18:56:45.995588   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:56:46.019765   63277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:56:46.019841   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.033497   63277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 18:56:46.033570   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.047513   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.060521   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.074441   63277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:56:46.088325   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.101213   63277 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.122423   63277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:46.135007   63277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:56:46.146221   63277 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 18:56:46.146284   63277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 18:56:46.169839   63277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:56:46.183407   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:46.324987   63277 ssh_runner.go:195] Run: sudo systemctl restart crio
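
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", opens unprivileged ports via default_sysctls, and then reloads systemd and restarts CRI-O. Below is a sketch of driving a few of those same edits through an injected command runner; the function name and signature are illustrative, and in the log the equivalent commands run over SSH on the guest.

package sketch

import "fmt"

// configureCRIO applies a subset of the sed edits shown in the log against
// /etc/crio/crio.conf.d/02-crio.conf, then restarts CRI-O, using whatever
// runner is injected (e.g. one that executes over SSH).
func configureCRIO(run func(cmd string) error, pauseImage, cgroupDriver string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}
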
	I1027 18:56:46.440290   63277 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:56:46.440374   63277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:56:46.446158   63277 start.go:563] Will wait 60s for crictl version
	I1027 18:56:46.446240   63277 ssh_runner.go:195] Run: which crictl
	I1027 18:56:46.450614   63277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 18:56:46.496013   63277 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 18:56:46.496113   63277 ssh_runner.go:195] Run: crio --version
	I1027 18:56:46.526418   63277 ssh_runner.go:195] Run: crio --version
	I1027 18:56:46.560428   63277 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 18:56:46.564607   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:46.565084   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:56:46.565113   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:56:46.565366   63277 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1027 18:56:46.570158   63277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:46.586255   63277 kubeadm.go:883] updating cluster {Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:56:46.586379   63277 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:46.586431   63277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:46.623555   63277 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 18:56:46.623625   63277 ssh_runner.go:195] Run: which lz4
	I1027 18:56:46.628237   63277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 18:56:46.633510   63277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 18:56:46.633544   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 18:56:48.156071   63277 crio.go:462] duration metric: took 1.527888186s to copy over tarball
	I1027 18:56:48.156150   63277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 18:56:49.783875   63277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.627696709s)
	I1027 18:56:49.783899   63277 crio.go:469] duration metric: took 1.627800498s to extract the tarball
	I1027 18:56:49.783908   63277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 18:56:49.829229   63277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:49.875294   63277 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:49.875323   63277 cache_images.go:85] Images are preloaded, skipping loading
	I1027 18:56:49.875334   63277 kubeadm.go:934] updating node { 192.168.39.216 8443 v1.34.1 crio true true} ...
	I1027 18:56:49.875442   63277 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-864929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
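
The kubelet drop-in above is rendered from the node's config (container runtime, Kubernetes version, node name and IP) and then copied to the guest as a systemd unit override. A text/template sketch that produces the same shape of drop-in follows; the template text mirrors the unit printed in the log, while the type and function names are illustrative.

package sketch

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the systemd drop-in shown in
// the log; minikube renders something of this shape before scp'ing it to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
var kubeletUnit = template.Must(template.New("kubelet").Parse(`[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`))

type kubeletOpts struct {
	ContainerRuntime  string
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

// renderKubeletUnit writes the rendered drop-in to stdout for inspection.
func renderKubeletUnit(opts kubeletOpts) error {
	return kubeletUnit.Execute(os.Stdout, opts)
}
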
	I1027 18:56:49.875581   63277 ssh_runner.go:195] Run: crio config
	I1027 18:56:49.932154   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:56:49.932179   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:49.932200   63277 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:56:49.932223   63277 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864929 NodeName:addons-864929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:56:49.932364   63277 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-864929"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.216"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 18:56:49.932437   63277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:56:49.945627   63277 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:56:49.945703   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:56:49.959045   63277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 18:56:49.983292   63277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:56:50.007675   63277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1027 18:56:50.032663   63277 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I1027 18:56:50.037426   63277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:50.053663   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:50.200983   63277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:56:50.242073   63277 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929 for IP: 192.168.39.216
	I1027 18:56:50.242097   63277 certs.go:195] generating shared ca certs ...
	I1027 18:56:50.242119   63277 certs.go:227] acquiring lock for ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.242309   63277 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 18:56:50.542245   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt ...
	I1027 18:56:50.542277   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt: {Name:mkb0b7411ce05946b9a6d920de38fad3ab6c6a5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.542460   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key ...
	I1027 18:56:50.542471   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key: {Name:mk283eb2e002819e788fa8f18c386299d47777a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.542548   63277 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 18:56:50.638160   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt ...
	I1027 18:56:50.638191   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt: {Name:mk8a0909df9310cadf02928e1cc040e0903818db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.638365   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key ...
	I1027 18:56:50.638377   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key: {Name:mk4aa59bab040235f70f65aa2d7af7f89bd4659d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.638460   63277 certs.go:257] generating profile certs ...
	I1027 18:56:50.638519   63277 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key
	I1027 18:56:50.638549   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt with IP's: []
	I1027 18:56:50.779809   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt ...
	I1027 18:56:50.779847   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: {Name:mka2b9867ee328b7112768834356aaca6b5fc109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.780044   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key ...
	I1027 18:56:50.780059   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.key: {Name:mkcbab4e1e83774a62e689c6d7789d3eb343f864 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:50.780139   63277 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d
	I1027 18:56:50.780161   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.216]
	I1027 18:56:51.313872   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d ...
	I1027 18:56:51.313911   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d: {Name:mk4942a380088e956850812de28b65602aee81bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.314117   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d ...
	I1027 18:56:51.314132   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d: {Name:mk2bf51af3cc29c0e7479b746ffe650e8b348547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.314226   63277 certs.go:382] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt.782a817d -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt
	I1027 18:56:51.314298   63277 certs.go:386] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key.782a817d -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key
	I1027 18:56:51.314355   63277 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key
	I1027 18:56:51.314373   63277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt with IP's: []
	I1027 18:56:51.489257   63277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt ...
	I1027 18:56:51.489292   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt: {Name:mk6be1958bd7a086d707056124a43ee705cf8efa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.489483   63277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key ...
	I1027 18:56:51.489496   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key: {Name:mkedbe974c66eb2183a2d8824fcd1a064e7f0629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.489667   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 18:56:51.489699   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:56:51.489734   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:56:51.489756   63277 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 18:56:51.490337   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:56:51.527261   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:56:51.566595   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:56:51.597942   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 18:56:51.630829   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:56:51.664688   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:56:51.696594   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:56:51.734852   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:56:51.770778   63277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:56:51.805559   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:56:51.833421   63277 ssh_runner.go:195] Run: openssl version
	I1027 18:56:51.841743   63277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:56:51.857852   63277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.864612   63277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.864680   63277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:51.873224   63277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 18:56:51.893213   63277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:56:51.899405   63277 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:56:51.899464   63277 kubeadm.go:400] StartCluster: {Name:addons-864929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-864929 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:51.899550   63277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:56:51.899604   63277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:56:51.945935   63277 cri.go:89] found id: ""
	I1027 18:56:51.946016   63277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:56:51.959289   63277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:56:51.972387   63277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:56:51.985164   63277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:56:51.985182   63277 kubeadm.go:157] found existing configuration files:
	
	I1027 18:56:51.985239   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:56:51.997222   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:56:51.997284   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:56:52.010322   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:56:52.022203   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:56:52.022274   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:56:52.034805   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:56:52.046201   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:56:52.046272   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:56:52.059475   63277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:56:52.070876   63277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:56:52.070957   63277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 18:56:52.083713   63277 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 18:56:52.243337   63277 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 18:57:05.929419   63277 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:57:05.929514   63277 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:57:05.929629   63277 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:57:05.929750   63277 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:57:05.929840   63277 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 18:57:05.929894   63277 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:57:05.931664   63277 out.go:252]   - Generating certificates and keys ...
	I1027 18:57:05.931750   63277 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:57:05.931835   63277 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:57:05.931942   63277 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:57:05.932018   63277 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:57:05.932119   63277 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:57:05.932200   63277 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:57:05.932269   63277 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:57:05.932432   63277 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-864929 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I1027 18:57:05.932514   63277 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:57:05.932685   63277 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-864929 localhost] and IPs [192.168.39.216 127.0.0.1 ::1]
	I1027 18:57:05.932782   63277 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:57:05.932893   63277 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:57:05.932942   63277 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:57:05.932998   63277 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:57:05.933056   63277 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:05.933116   63277 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:05.933163   63277 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:05.933242   63277 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:05.933312   63277 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:05.933416   63277 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:05.933518   63277 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:05.934838   63277 out.go:252]   - Booting up control plane ...
	I1027 18:57:05.934938   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:05.935072   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:05.935153   63277 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:05.935254   63277 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:05.935331   63277 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:05.935413   63277 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:05.935480   63277 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:05.935513   63277 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:05.935618   63277 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:05.935705   63277 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:05.935754   63277 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502502542s
	I1027 18:57:05.935827   63277 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:05.935892   63277 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.216:8443/livez
	I1027 18:57:05.935992   63277 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:05.936113   63277 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:05.936221   63277 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.069256284s
	I1027 18:57:05.936298   63277 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.735103952s
	I1027 18:57:05.936363   63277 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003425011s
	I1027 18:57:05.936455   63277 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:05.936590   63277 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:05.936648   63277 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:05.936807   63277 kubeadm.go:318] [mark-control-plane] Marking the node addons-864929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:05.936859   63277 kubeadm.go:318] [bootstrap-token] Using token: s2v11a.htd6rq4ivxisd01i
	I1027 18:57:05.938605   63277 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:05.938701   63277 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:05.938793   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:05.938934   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:05.939090   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:05.939208   63277 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:05.939282   63277 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:05.939396   63277 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:05.939437   63277 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:05.939494   63277 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:05.939501   63277 kubeadm.go:318] 
	I1027 18:57:05.939571   63277 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:05.939578   63277 kubeadm.go:318] 
	I1027 18:57:05.939688   63277 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:05.939702   63277 kubeadm.go:318] 
	I1027 18:57:05.939738   63277 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:05.939802   63277 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:05.939870   63277 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:05.939883   63277 kubeadm.go:318] 
	I1027 18:57:05.939933   63277 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:05.939939   63277 kubeadm.go:318] 
	I1027 18:57:05.939985   63277 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:05.939991   63277 kubeadm.go:318] 
	I1027 18:57:05.940048   63277 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:05.940134   63277 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:05.940215   63277 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:05.940222   63277 kubeadm.go:318] 
	I1027 18:57:05.940329   63277 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:05.940400   63277 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:05.940406   63277 kubeadm.go:318] 
	I1027 18:57:05.940470   63277 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s2v11a.htd6rq4ivxisd01i \
	I1027 18:57:05.940553   63277 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a \
	I1027 18:57:05.940572   63277 kubeadm.go:318] 	--control-plane 
	I1027 18:57:05.940578   63277 kubeadm.go:318] 
	I1027 18:57:05.940643   63277 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:05.940649   63277 kubeadm.go:318] 
	I1027 18:57:05.940731   63277 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s2v11a.htd6rq4ivxisd01i \
	I1027 18:57:05.940833   63277 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a 
	I1027 18:57:05.940844   63277 cni.go:84] Creating CNI manager for ""
	I1027 18:57:05.940851   63277 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:57:05.943012   63277 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 18:57:05.944248   63277 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 18:57:05.965148   63277 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1027 18:57:05.989594   63277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:05.989700   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:05.989727   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-864929 minikube.k8s.io/updated_at=2025_10_27T18_57_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-864929 minikube.k8s.io/primary=true
	I1027 18:57:06.017183   63277 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:06.172167   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:06.672287   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.173180   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.673264   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.172481   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.672997   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.173247   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.672863   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.172654   63277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.270470   63277 kubeadm.go:1113] duration metric: took 4.280852325s to wait for elevateKubeSystemPrivileges
	I1027 18:57:10.270507   63277 kubeadm.go:402] duration metric: took 18.371048599s to StartCluster
	I1027 18:57:10.270544   63277 settings.go:142] acquiring lock: {Name:mk19a39086427cb47b9bb78fd0b5176c91a751d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:10.270695   63277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:57:10.271083   63277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/kubeconfig: {Name:mk90c4d883178b7191d62a8cd99434bc24dd555f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:10.271332   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:10.271363   63277 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:10.271434   63277 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 18:57:10.271577   63277 addons.go:69] Setting yakd=true in profile "addons-864929"
	I1027 18:57:10.271588   63277 addons.go:69] Setting inspektor-gadget=true in profile "addons-864929"
	I1027 18:57:10.271607   63277 addons.go:238] Setting addon yakd=true in "addons-864929"
	I1027 18:57:10.271624   63277 addons.go:238] Setting addon inspektor-gadget=true in "addons-864929"
	I1027 18:57:10.271619   63277 addons.go:69] Setting default-storageclass=true in profile "addons-864929"
	I1027 18:57:10.271636   63277 addons.go:69] Setting registry-creds=true in profile "addons-864929"
	I1027 18:57:10.271644   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271653   63277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864929"
	I1027 18:57:10.271661   63277 addons.go:69] Setting metrics-server=true in profile "addons-864929"
	I1027 18:57:10.271672   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271678   63277 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864929"
	I1027 18:57:10.271688   63277 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-864929"
	I1027 18:57:10.271662   63277 addons.go:69] Setting ingress=true in profile "addons-864929"
	I1027 18:57:10.271718   63277 addons.go:238] Setting addon ingress=true in "addons-864929"
	I1027 18:57:10.271723   63277 addons.go:69] Setting registry=true in profile "addons-864929"
	I1027 18:57:10.271735   63277 addons.go:238] Setting addon registry=true in "addons-864929"
	I1027 18:57:10.271751   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271781   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271779   63277 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864929"
	I1027 18:57:10.271801   63277 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864929"
	I1027 18:57:10.272335   63277 addons.go:69] Setting ingress-dns=true in profile "addons-864929"
	I1027 18:57:10.272359   63277 addons.go:238] Setting addon ingress-dns=true in "addons-864929"
	I1027 18:57:10.272388   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272662   63277 addons.go:69] Setting storage-provisioner=true in profile "addons-864929"
	I1027 18:57:10.272684   63277 addons.go:238] Setting addon storage-provisioner=true in "addons-864929"
	I1027 18:57:10.272703   63277 addons.go:69] Setting volcano=true in profile "addons-864929"
	I1027 18:57:10.272719   63277 addons.go:69] Setting volumesnapshots=true in profile "addons-864929"
	I1027 18:57:10.272728   63277 addons.go:238] Setting addon volcano=true in "addons-864929"
	I1027 18:57:10.272731   63277 addons.go:238] Setting addon volumesnapshots=true in "addons-864929"
	I1027 18:57:10.272747   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272709   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271622   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:10.272915   63277 addons.go:69] Setting cloud-spanner=true in profile "addons-864929"
	I1027 18:57:10.272937   63277 addons.go:238] Setting addon cloud-spanner=true in "addons-864929"
	I1027 18:57:10.271674   63277 addons.go:238] Setting addon metrics-server=true in "addons-864929"
	I1027 18:57:10.272967   63277 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-864929"
	I1027 18:57:10.272979   63277 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-864929"
	I1027 18:57:10.272994   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.272962   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271656   63277 addons.go:238] Setting addon registry-creds=true in "addons-864929"
	I1027 18:57:10.273353   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.271719   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273804   63277 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864929"
	I1027 18:57:10.272992   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273875   63277 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-864929"
	I1027 18:57:10.273876   63277 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:10.273923   63277 addons.go:69] Setting gcp-auth=true in profile "addons-864929"
	I1027 18:57:10.273943   63277 mustload.go:65] Loading cluster: addons-864929
	I1027 18:57:10.272753   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.273912   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.274165   63277 config.go:182] Loaded profile config "addons-864929": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:10.275351   63277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:10.280649   63277 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-864929"
	I1027 18:57:10.280696   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.280648   63277 addons.go:238] Setting addon default-storageclass=true in "addons-864929"
	I1027 18:57:10.280792   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:10.281457   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:10.281464   63277 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:10.281464   63277 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:10.281472   63277 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:10.281475   63277 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:10.282784   63277 host.go:66] Checking if "addons-864929" exists ...
	W1027 18:57:10.283458   63277 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:10.284391   63277 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:10.284413   63277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:10.284781   63277 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:10.284783   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:10.284784   63277 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:10.284827   63277 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:10.285656   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:10.285668   63277 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:10.285667   63277 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:10.286196   63277 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:10.285679   63277 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:10.285694   63277 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:10.285702   63277 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:10.285712   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:10.285727   63277 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:10.285763   63277 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:10.286475   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:10.287027   63277 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:10.287211   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:10.286535   63277 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:10.287783   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:10.287353   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:10.287361   63277 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:10.288659   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:10.287367   63277 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:10.288803   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:10.288238   63277 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:10.288289   63277 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:10.289099   63277 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:10.289112   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:10.288511   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.289111   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:10.289229   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:10.289243   63277 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:10.289702   63277 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:10.289754   63277 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:10.289812   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:10.290341   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.290649   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.291093   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:10.291547   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.292319   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:10.292764   63277 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:10.292901   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:10.293496   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.294077   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:10.294199   63277 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:10.294235   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:10.294665   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.294862   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.294885   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.295658   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.296760   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.296778   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.296804   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.297672   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.298250   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.298288   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.298666   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:10.298926   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.299336   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.299404   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.300642   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301088   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.301156   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301344   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.301372   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301519   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:10.301767   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.301854   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.302100   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.302209   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302286   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302408   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.302456   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302745   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.302894   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.303125   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303161   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303130   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303303   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303406   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303460   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.303507   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.303830   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304098   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304190   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.304220   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304324   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304342   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.304762   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.304791   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.304845   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305002   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:10.305135   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305171   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.305224   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.305423   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.305445   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.305836   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.305863   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.306116   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:10.307841   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:10.309061   63277 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:10.310163   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:10.310201   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:10.312870   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.313280   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:10.313301   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:10.313464   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	W1027 18:57:10.535116   63277 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54864->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.535158   63277 retry.go:31] will retry after 369.415138ms: ssh: handshake failed: read tcp 192.168.39.1:54864->192.168.39.216:22: read: connection reset by peer
	W1027 18:57:10.541619   63277 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54870->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.541652   63277 retry.go:31] will retry after 219.162578ms: ssh: handshake failed: read tcp 192.168.39.1:54870->192.168.39.216:22: read: connection reset by peer
	I1027 18:57:10.985109   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:10.985150   63277 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:11.132247   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:11.138615   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:11.138646   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:11.143955   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:11.143981   63277 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:11.155121   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:11.155156   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:11.157384   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:11.170100   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:11.321437   63277 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:11.321472   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:11.329006   63277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.057632855s)
	I1027 18:57:11.329090   63277 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.053707515s)
	I1027 18:57:11.329177   63277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:11.329278   63277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 18:57:11.351194   63277 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:11.351228   63277 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:11.372537   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:11.394769   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:11.394810   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:11.396018   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:11.456333   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:11.584380   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:11.712662   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:11.712687   63277 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:11.735201   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:11.735231   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:11.839761   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:11.839788   63277 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:11.900683   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.042980   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.058451   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:12.058490   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:12.070398   63277 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.070429   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:12.354158   63277 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.354199   63277 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:12.362109   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.365612   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:12.365648   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:12.438920   63277 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:12.438943   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:12.700463   63277 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:12.700490   63277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:12.700500   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.840634   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.856064   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:12.902734   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:12.902762   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:13.137669   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:13.137698   63277 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:13.351985   63277 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:13.352016   63277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:13.596268   63277 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:13.596294   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:13.714853   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.582551362s)
	I1027 18:57:13.850557   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:13.850595   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:14.071067   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:14.389873   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:14.389897   63277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:14.901480   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:14.901504   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:15.349961   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:15.349990   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:15.716286   63277 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:15.716315   63277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:16.040523   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:17.129847   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.972419936s)
	I1027 18:57:17.129872   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.959737081s)
	I1027 18:57:17.129940   63277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.80062985s)
	I1027 18:57:17.129973   63277 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1027 18:57:17.129951   63277 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.800751256s)
	I1027 18:57:17.130902   63277 node_ready.go:35] waiting up to 6m0s for node "addons-864929" to be "Ready" ...
	I1027 18:57:17.155377   63277 node_ready.go:49] node "addons-864929" is "Ready"
	I1027 18:57:17.155425   63277 node_ready.go:38] duration metric: took 24.493356ms for node "addons-864929" to be "Ready" ...
	I1027 18:57:17.155441   63277 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:57:17.155509   63277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:57:17.249988   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.877396986s)
	I1027 18:57:17.250062   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.854018331s)
	I1027 18:57:17.250127   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.793752896s)
	I1027 18:57:17.250185   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.665778222s)
	I1027 18:57:17.686081   63277 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-864929" context rescaled to 1 replicas
	I1027 18:57:17.769830   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:17.773614   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:17.774163   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:17.774193   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:17.774409   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:17.835030   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.934303033s)
	W1027 18:57:17.835104   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.835131   63277 retry.go:31] will retry after 292.877887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:18.055795   63277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:18.104947   63277 addons.go:238] Setting addon gcp-auth=true in "addons-864929"
	I1027 18:57:18.105010   63277 host.go:66] Checking if "addons-864929" exists ...
	I1027 18:57:18.106942   63277 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:18.109558   63277 main.go:141] libmachine: domain addons-864929 has defined MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:18.110007   63277 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f3:30:05", ip: ""} in network mk-addons-864929: {Iface:virbr1 ExpiryTime:2025-10-27 19:56:40 +0000 UTC Type:0 Mac:52:54:00:f3:30:05 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:addons-864929 Clientid:01:52:54:00:f3:30:05}
	I1027 18:57:18.110059   63277 main.go:141] libmachine: domain addons-864929 has defined IP address 192.168.39.216 and MAC address 52:54:00:f3:30:05 in network mk-addons-864929
	I1027 18:57:18.110215   63277 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/addons-864929/id_rsa Username:docker}
	I1027 18:57:18.128649   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:19.900432   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.538276397s)
	I1027 18:57:19.900485   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.199953129s)
	I1027 18:57:19.900517   63277 addons.go:479] Verifying addon registry=true in "addons-864929"
	I1027 18:57:19.900644   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.059975315s)
	I1027 18:57:19.900669   63277 addons.go:479] Verifying addon metrics-server=true in "addons-864929"
	I1027 18:57:19.900741   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.044632307s)
	I1027 18:57:19.902352   63277 out.go:179] * Verifying registry addon...
	I1027 18:57:19.902350   63277 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-864929 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:19.903449   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.860430632s)
	I1027 18:57:19.903482   63277 addons.go:479] Verifying addon ingress=true in "addons-864929"
	I1027 18:57:19.905028   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:19.905292   63277 out.go:179] * Verifying ingress addon...
	I1027 18:57:19.907320   63277 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:19.958238   63277 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:19.958265   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.958292   63277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:19.958311   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.433182   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.434585   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.543543   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.472423735s)
	W1027 18:57:20.543599   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:20.543627   63277 retry.go:31] will retry after 255.689771ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:20.800094   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:20.922952   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.923554   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.442922   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.442981   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.773578   63277 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.618035898s)
	I1027 18:57:21.773618   63277 api_server.go:72] duration metric: took 11.502220917s to wait for apiserver process to appear ...
	I1027 18:57:21.773628   63277 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:57:21.773654   63277 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1027 18:57:21.774535   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.733957112s)
	I1027 18:57:21.774578   63277 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-864929"
	I1027 18:57:21.776672   63277 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:21.779451   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:21.792875   63277 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I1027 18:57:21.806882   63277 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:21.806906   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.811185   63277 api_server.go:141] control plane version: v1.34.1
	I1027 18:57:21.811218   63277 api_server.go:131] duration metric: took 37.583056ms to wait for apiserver health ...
	I1027 18:57:21.811241   63277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:57:21.837867   63277 system_pods.go:59] 20 kube-system pods found
	I1027 18:57:21.837924   63277 system_pods.go:61] "amd-gpu-device-plugin-zg4tw" [26b73888-1e70-456d-ab70-4392ce52af26] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:21.837935   63277 system_pods.go:61] "coredns-66bc5c9577-5v77t" [13dc8b33-a53f-4df7-8cea-be41471727fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.837946   63277 system_pods.go:61] "coredns-66bc5c9577-f8dfl" [7ada2d5f-c124-4130-8e4d-f5f6f0d2b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.837954   63277 system_pods.go:61] "csi-hostpath-attacher-0" [923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:21.837960   63277 system_pods.go:61] "csi-hostpath-resizer-0" [2d2edb44-d6fd-41c7-aebc-45f7051be9b9] Pending
	I1027 18:57:21.837970   63277 system_pods.go:61] "csi-hostpathplugin-2kk6q" [4df09867-d21a-494d-b1c1-b33d1ae05292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:21.837976   63277 system_pods.go:61] "etcd-addons-864929" [0423c9dd-5674-4e91-be68-a3255c87fce6] Running
	I1027 18:57:21.837982   63277 system_pods.go:61] "kube-apiserver-addons-864929" [b43be527-80f0-4d18-8362-54d51f1f3a19] Running
	I1027 18:57:21.837987   63277 system_pods.go:61] "kube-controller-manager-addons-864929" [f65a9a0f-0799-4414-87de-291236ac723d] Running
	I1027 18:57:21.837995   63277 system_pods.go:61] "kube-ingress-dns-minikube" [66c0967e-2aba-46db-9b8d-50afb9e508c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:21.838001   63277 system_pods.go:61] "kube-proxy-5grdt" [73ab29d4-f3af-4942-87b0-5b146ec49fd2] Running
	I1027 18:57:21.838010   63277 system_pods.go:61] "kube-scheduler-addons-864929" [ac2cfd72-7a4b-46a5-b8fc-d1b7552feb30] Running
	I1027 18:57:21.838017   63277 system_pods.go:61] "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:21.838026   63277 system_pods.go:61] "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:21.838050   63277 system_pods.go:61] "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:21.838063   63277 system_pods.go:61] "registry-creds-764b6fb674-g7z85" [b7d5c5d1-64ba-4adf-b61a-42be8e53ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:21.838072   63277 system_pods.go:61] "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:21.838085   63277 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9nfvf" [e133be4d-c9ac-45ee-8523-3197eb5ae1dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.838099   63277 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t78cg" [07e1f13e-a7d4-496f-9f63-f96306459e61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.838111   63277 system_pods.go:61] "storage-provisioner" [1ec5b960-2f51-438a-9968-46e1bea6ddc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:21.838126   63277 system_pods.go:74] duration metric: took 26.872544ms to wait for pod list to return data ...
	I1027 18:57:21.838141   63277 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:57:21.867654   63277 default_sa.go:45] found service account: "default"
	I1027 18:57:21.867680   63277 default_sa.go:55] duration metric: took 29.532579ms for default service account to be created ...
	I1027 18:57:21.867689   63277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:57:21.883210   63277 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:21.883247   63277 system_pods.go:89] "amd-gpu-device-plugin-zg4tw" [26b73888-1e70-456d-ab70-4392ce52af26] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:21.883257   63277 system_pods.go:89] "coredns-66bc5c9577-5v77t" [13dc8b33-a53f-4df7-8cea-be41471727fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.883266   63277 system_pods.go:89] "coredns-66bc5c9577-f8dfl" [7ada2d5f-c124-4130-8e4d-f5f6f0d2b856] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:21.883272   63277 system_pods.go:89] "csi-hostpath-attacher-0" [923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:21.883278   63277 system_pods.go:89] "csi-hostpath-resizer-0" [2d2edb44-d6fd-41c7-aebc-45f7051be9b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:21.883294   63277 system_pods.go:89] "csi-hostpathplugin-2kk6q" [4df09867-d21a-494d-b1c1-b33d1ae05292] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:21.883301   63277 system_pods.go:89] "etcd-addons-864929" [0423c9dd-5674-4e91-be68-a3255c87fce6] Running
	I1027 18:57:21.883308   63277 system_pods.go:89] "kube-apiserver-addons-864929" [b43be527-80f0-4d18-8362-54d51f1f3a19] Running
	I1027 18:57:21.883313   63277 system_pods.go:89] "kube-controller-manager-addons-864929" [f65a9a0f-0799-4414-87de-291236ac723d] Running
	I1027 18:57:21.883326   63277 system_pods.go:89] "kube-ingress-dns-minikube" [66c0967e-2aba-46db-9b8d-50afb9e508c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:21.883331   63277 system_pods.go:89] "kube-proxy-5grdt" [73ab29d4-f3af-4942-87b0-5b146ec49fd2] Running
	I1027 18:57:21.883339   63277 system_pods.go:89] "kube-scheduler-addons-864929" [ac2cfd72-7a4b-46a5-b8fc-d1b7552feb30] Running
	I1027 18:57:21.883347   63277 system_pods.go:89] "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:21.883358   63277 system_pods.go:89] "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:21.883365   63277 system_pods.go:89] "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:21.883372   63277 system_pods.go:89] "registry-creds-764b6fb674-g7z85" [b7d5c5d1-64ba-4adf-b61a-42be8e53ce7e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:21.883378   63277 system_pods.go:89] "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:21.883383   63277 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9nfvf" [e133be4d-c9ac-45ee-8523-3197eb5ae1dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.883388   63277 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t78cg" [07e1f13e-a7d4-496f-9f63-f96306459e61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:21.883393   63277 system_pods.go:89] "storage-provisioner" [1ec5b960-2f51-438a-9968-46e1bea6ddc7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:21.883404   63277 system_pods.go:126] duration metric: took 15.70908ms to wait for k8s-apps to be running ...
	I1027 18:57:21.883416   63277 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:57:21.883474   63277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:57:21.924022   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.927212   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.158899   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.03020142s)
	I1027 18:57:22.158954   63277 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.051987547s)
	W1027 18:57:22.158980   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.159006   63277 retry.go:31] will retry after 279.686083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.160959   63277 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:22.162547   63277 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:22.164115   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:22.164141   63277 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:22.261201   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:22.261230   63277 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:22.288886   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.352572   63277 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:22.352609   63277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:22.439692   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:22.441909   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:22.481468   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.481666   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.788128   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.914985   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.915276   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.285377   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.418349   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.418666   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.583239   63277 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.699734966s)
	I1027 18:57:23.583281   63277 system_svc.go:56] duration metric: took 1.699860035s WaitForService to wait for kubelet
	I1027 18:57:23.583292   63277 kubeadm.go:586] duration metric: took 13.311893893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:57:23.583319   63277 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:57:23.583423   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783267207s)
	I1027 18:57:23.593344   63277 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 18:57:23.593372   63277 node_conditions.go:123] node cpu capacity is 2
	I1027 18:57:23.593391   63277 node_conditions.go:105] duration metric: took 10.067491ms to run NodePressure ...
	I1027 18:57:23.593404   63277 start.go:241] waiting for startup goroutines ...
	I1027 18:57:23.787519   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.924794   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.924888   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.290306   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.848359661s)
	I1027 18:57:24.291626   63277 addons.go:479] Verifying addon gcp-auth=true in "addons-864929"
	I1027 18:57:24.294508   63277 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:24.296641   63277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:24.328761   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.328910   63277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:24.328951   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.413802   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:24.416333   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.786549   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.805212   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.915701   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.921802   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.061422   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.621678381s)
	W1027 18:57:25.061478   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:25.061503   63277 retry.go:31] will retry after 804.946825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:25.289162   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.301160   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.421590   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.423412   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.785953   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.802888   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.867047   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:25.919138   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.919440   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.286933   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.301794   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.417105   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.417267   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.785587   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.804169   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.908637   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.912996   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.288028   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.300864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.412910   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.416533   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.456859   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.589752651s)
	W1027 18:57:27.456908   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.456932   63277 retry.go:31] will retry after 685.459936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.784840   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.801850   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.910590   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.912874   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.143005   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:28.285631   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.300220   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.419303   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.422363   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.784623   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.802401   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.911601   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.915428   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.283493   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.300718   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.364540   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.221494949s)
	W1027 18:57:29.364577   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:29.364611   63277 retry.go:31] will retry after 1.757799431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:29.416322   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.418953   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.787868   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.799055   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.910571   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.914273   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.286180   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.303999   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.413104   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.416370   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.787744   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.803419   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.916360   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.919438   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.122558   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:31.285676   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.301308   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.411868   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.412485   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.787290   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.802700   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.913644   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.915831   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.286432   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.304334   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.374445   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.251833687s)
	W1027 18:57:32.374511   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:32.374541   63277 retry.go:31] will retry after 2.78595925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:32.416811   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.416913   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.785363   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.804140   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.915420   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.916567   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.292316   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.303111   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.464111   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.464335   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.784707   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.803523   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.909242   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.911455   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.303435   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.303506   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.413609   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.417021   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.784372   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.802229   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.911142   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.916104   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.161393   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:35.283283   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.301025   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:35.410195   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.416262   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.146770   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.157278   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.158333   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.158723   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.286639   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.300897   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.418783   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.423389   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.618778   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.457337067s)
	W1027 18:57:36.618824   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:36.618849   63277 retry.go:31] will retry after 2.808126494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:36.785856   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.800053   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.911223   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.913610   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.283520   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.300915   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.411384   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.411564   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.783128   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.801353   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.908775   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.911143   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.284488   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.302812   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.423418   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.423531   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:38.784017   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.800264   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.911392   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.912809   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.284702   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.302513   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.414232   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:39.414350   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.427461   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:39.837291   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.837565   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.910765   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.914552   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.287903   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.301760   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.416079   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.416206   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.448955   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.021449854s)
	W1027 18:57:40.449007   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:40.449046   63277 retry.go:31] will retry after 2.389005779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:40.785654   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.802757   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.913550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.914781   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.286164   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.300417   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.408904   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.411315   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.783667   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.801000   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.911341   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.911526   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.283379   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.300298   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.413464   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.413759   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.784936   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.801747   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.838978   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:42.914433   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.915753   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.284491   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.306054   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.410133   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.414779   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.787454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.802514   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.914613   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.915563   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.044025   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.205001809s)
	W1027 18:57:44.044086   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:44.044113   63277 retry.go:31] will retry after 6.569226607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:44.286635   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.301882   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.420149   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.420239   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.786772   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.801152   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.907893   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.912659   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.282844   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.299210   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.408847   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.415564   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.785932   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.799703   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.910796   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.912722   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.284380   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.300262   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.411586   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.413618   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.785774   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.802487   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.909401   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.911157   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.285427   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.301018   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.411570   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.415374   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.784426   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.800958   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.909404   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.911321   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.285898   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.301526   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.409153   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.420016   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.784072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.799905   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.910147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.911420   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.283552   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.301303   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.413410   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.413468   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.785136   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.803428   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.912135   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.918025   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.284843   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.300698   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.417847   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.418870   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.614173   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:50.785558   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.803089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.912911   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.914476   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.285211   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.299828   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.410597   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.417162   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.760476   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.146250047s)
	W1027 18:57:51.760537   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:51.760566   63277 retry.go:31] will retry after 8.458351618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:51.788367   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.802674   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.912952   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.915907   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.284979   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.302620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.417553   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.422725   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.785476   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.801653   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.911126   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.911882   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.286067   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.300801   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.418960   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.420629   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.851794   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.853714   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.922918   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.923746   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.287898   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.302372   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.425848   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.426641   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.792214   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.801130   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.915252   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.915642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.283583   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.304005   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.408097   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.413323   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.784488   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.806326   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.913127   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.915413   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.427055   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.427252   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.427310   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.428375   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.787593   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.888446   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.912008   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.913074   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.288183   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.305878   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.417164   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.418270   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.784210   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.802894   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.909720   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.912051   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.285258   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.300454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.412828   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.414479   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.784411   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.801492   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.911089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.912058   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.283993   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.299989   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.412668   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.419029   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.784705   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.804623   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.909691   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.912501   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.220065   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:00.284147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.302108   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.416685   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.418642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.786304   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.803095   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.911931   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.915399   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.286093   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.301584   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.412443   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.414896   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.458011   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.237894856s)
	W1027 18:58:01.458080   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.458103   63277 retry.go:31] will retry after 16.405228739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.784222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.803092   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.908661   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.910814   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.284729   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.302770   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.414874   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.414965   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.789864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.800637   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.914649   63277 kapi.go:107] duration metric: took 43.009618954s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:58:02.914893   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.286072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.299857   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.418386   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.791799   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.803302   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.914538   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.286257   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.302605   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.416367   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.783206   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.867278   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.911899   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.285072   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.300843   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.414023   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.785545   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.803246   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.924390   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.284685   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.301604   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.415639   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.786150   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.886295   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.912165   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.284913   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.302714   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.412538   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.787904   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.801832   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.911724   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.282968   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.300993   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.414821   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.786690   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.803923   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.911877   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.297222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.301996   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.422572   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.788150   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.805824   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.913774   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.293390   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.305508   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.420862   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.792615   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.802761   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.912280   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.288594   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.306089   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.417798   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.787690   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.802673   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.912590   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.284220   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.308323   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.414975   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.787839   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.800833   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.915221   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.540620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.543249   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.543347   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.788031   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.805504   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.912643   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.288515   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.303121   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.425413   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.786082   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.800338   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.911089   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.290704   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.300954   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.415781   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.785268   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.801079   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.914809   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.284643   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.301478   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.425519   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.783788   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.802402   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.916061   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.289294   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.307167   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.426377   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.784384   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.800170   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.864299   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:17.913670   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.286332   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.302108   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.413514   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.786024   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.802816   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.911079   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.285445   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.389432   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.439230   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.57487824s)
	W1027 18:58:19.439294   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:19.439322   63277 retry.go:31] will retry after 19.626476762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:19.486856   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.786120   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.806643   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.910901   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.287756   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.302427   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.418486   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.783960   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.800528   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.913267   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.285594   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.302211   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.420494   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.786759   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.804159   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.912377   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.283620   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.301149   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.427642   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.783574   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.802410   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.914836   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.288209   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.303010   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.421096   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.789207   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.808143   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.911641   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.286064   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.303547   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.425719   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.792130   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.801495   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.913750   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.289935   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.305864   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.432159   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.784691   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.803435   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.912224   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.285500   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.301355   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.418759   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.785783   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.810515   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.912606   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.284842   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.300596   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.415566   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.787354   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.800995   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.912310   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.284479   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.303281   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.419682   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.789550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.800133   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.915291   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.288142   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.302992   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.418531   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.785066   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.800998   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.911612   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.287335   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.300823   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.414607   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.785353   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.801683   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.914771   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.286892   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.309512   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.413660   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.784745   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.804007   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.914073   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.285574   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.302369   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.415432   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.787607   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.801278   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.912924   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.286454   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.300583   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.413776   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.790802   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.808782   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.912972   63277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.286709   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.304110   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.420826   63277 kapi.go:107] duration metric: took 1m14.513497503s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:58:34.786102   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.801992   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.285498   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.301550   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.784165   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.800807   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.284911   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.299796   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.788910   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.804143   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.284496   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.302139   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.785508   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.802879   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.286869   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.300852   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.786222   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.804588   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.066915   63277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:39.318253   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.410241   63277 kapi.go:107] duration metric: took 1m15.113592039s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:58:39.412086   63277 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-864929 cluster.
	I1027 18:58:39.413383   63277 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:58:39.414377   63277 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1027 18:58:39.785506   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.146885   63277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.0799187s)
	W1027 18:58:40.146963   63277 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:58:40.147096   63277 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
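	The validation failure reported above means kubectl could not find the top-level apiVersion and kind fields in /etc/kubernetes/addons/ig-crd.yaml, so the inspektor-gadget addon's CRD apply kept failing on every retry. As a point of reference only, a minimal sketch of a well-formed CRD header is shown below; the group, resource names, and version are illustrative assumptions, not the actual inspektor-gadget definitions.

	# Illustrative sketch only: the two fields kubectl reports as missing,
	# plus the smallest valid apiextensions.k8s.io/v1 CRD body around them.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.gadget.example.io      # hypothetical name: <plural>.<group>
	spec:
	  group: gadget.example.io              # hypothetical API group
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

	As the kubectl error text itself notes, the check can also be bypassed by appending --validate=false to the apply command, at the cost of skipping client-side validation entirely.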
	I1027 18:58:40.287330   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.782964   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.285147   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.783255   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.286213   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.785272   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.282878   63277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.789437   63277 kapi.go:107] duration metric: took 1m22.009986905s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 18:58:43.791464   63277 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, registry-creds, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 18:58:43.792829   63277 addons.go:514] duration metric: took 1m33.521403387s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner amd-gpu-device-plugin registry-creds default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 18:58:43.792875   63277 start.go:246] waiting for cluster config update ...
	I1027 18:58:43.792913   63277 start.go:255] writing updated cluster config ...
	I1027 18:58:43.793226   63277 ssh_runner.go:195] Run: rm -f paused
	I1027 18:58:43.802235   63277 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:43.806653   63277 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f8dfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.812431   63277 pod_ready.go:94] pod "coredns-66bc5c9577-f8dfl" is "Ready"
	I1027 18:58:43.812452   63277 pod_ready.go:86] duration metric: took 5.764753ms for pod "coredns-66bc5c9577-f8dfl" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.816160   63277 pod_ready.go:83] waiting for pod "etcd-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.821965   63277 pod_ready.go:94] pod "etcd-addons-864929" is "Ready"
	I1027 18:58:43.821993   63277 pod_ready.go:86] duration metric: took 5.807724ms for pod "etcd-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.824005   63277 pod_ready.go:83] waiting for pod "kube-apiserver-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.828898   63277 pod_ready.go:94] pod "kube-apiserver-addons-864929" is "Ready"
	I1027 18:58:43.828923   63277 pod_ready.go:86] duration metric: took 4.897075ms for pod "kube-apiserver-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:43.830643   63277 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.207152   63277 pod_ready.go:94] pod "kube-controller-manager-addons-864929" is "Ready"
	I1027 18:58:44.207194   63277 pod_ready.go:86] duration metric: took 376.531709ms for pod "kube-controller-manager-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.415720   63277 pod_ready.go:83] waiting for pod "kube-proxy-5grdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:44.807579   63277 pod_ready.go:94] pod "kube-proxy-5grdt" is "Ready"
	I1027 18:58:44.807611   63277 pod_ready.go:86] duration metric: took 391.860267ms for pod "kube-proxy-5grdt" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.008299   63277 pod_ready.go:83] waiting for pod "kube-scheduler-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.409571   63277 pod_ready.go:94] pod "kube-scheduler-addons-864929" is "Ready"
	I1027 18:58:45.409599   63277 pod_ready.go:86] duration metric: took 401.265666ms for pod "kube-scheduler-addons-864929" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:45.409611   63277 pod_ready.go:40] duration metric: took 1.607328787s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:45.455187   63277 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 18:58:45.457073   63277 out.go:179] * Done! kubectl is now configured to use "addons-864929" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.541900686Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.542345056Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.554321788Z" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" file="docker/docker_client.go:901"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.555002089Z" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Alibrary%2Fbusybox%3Apull&service=registry.docker.io" file="docker/docker_client.go:861"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.573493736Z" level=debug msg="GET https://registry-1.docker.io/v2/library/busybox/manifests/stable" file="docker/docker_client.go:631"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.579053548Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0bff4e72-4541-4279-847f-70c86e2f5c95 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.579187116Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58bc1a39-2999-4c71-b4a7-9231f1bd56b2 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.579761571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58bc1a39-2999-4c71-b4a7-9231f1bd56b2 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.580258593Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:750f263ef26428763cf6b0e145e880c9cc04be8f3139c343ec49c6b28652ca7e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-wmhrh,Uid:ce19a12f-43e8-4993-a64c-ef90bd25467c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591697565205855,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-wmhrh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ce19a12f-43e8-4993-a64c-ef90bd25467c,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:01:37.245283681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c16d36466f25d5b4ba3c3eff1817f9578774b155cbb1d756d783ab5c93bdd8c8,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:504b682e-4d7e-4f98-913e-efaa9ccfd4a1,Namespace:default,Attempt:
0,},State:SANDBOX_READY,CreatedAt:1761591571893064553,Labels:map[string]string{app: task-pv-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 504b682e-4d7e-4f98-913e-efaa9ccfd4a1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:59:31.573804320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:570f94482126cb74d7ad40a4629c41914e618364f2b2d8303dde39e5aec6705e,Metadata:&PodSandboxMetadata{Name:test-local-path,Uid:4d1f2112-b21d-4876-abde-84c8de8078a0,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591565602023786,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4d1f2112-b21d-4876-abde-84c8de8078a0,run: test-local-path,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\
":{},\"labels\":{\"run\":\"test-local-path\"},\"name\":\"test-local-path\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"command\":[\"sh\",\"-c\",\"echo 'local-path-provisioner' \\u003e /test/file1\"],\"image\":\"busybox:stable\",\"name\":\"busybox\",\"volumeMounts\":[{\"mountPath\":\"/test\",\"name\":\"data\"}]}],\"restartPolicy\":\"OnFailure\",\"volumes\":[{\"name\":\"data\",\"persistentVolumeClaim\":{\"claimName\":\"test-pvc\"}}]}}\n,kubernetes.io/config.seen: 2025-10-27T18:59:25.280938920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9e5f3a97-dcd1-44e6-920b-2953ee6ba066,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591552620410485,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,run: nginx,},Annotations:map[string
]string{kubernetes.io/config.seen: 2025-10-27T18:59:12.265024398Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a668ad58-4082-4722-a352-3bd62c30df9b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591526377152878,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:58:46.053221959Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-2kk6q,Uid:4df09867-d21a-494d-b1c1-b33d1ae05292,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591444549557083,Labels:map[string]string{addonmanager.kub
ernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:21.559716524Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:2d2edb44-d6fd-41c7-aebc-45f7051be9b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591443520871953,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.k
ubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:21.778463177Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591443518217408,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:21.415845940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-9nfvf,Uid:e133be4d-c9ac-45ee-8523-3197eb5ae1dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591442796934443,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:
57:20.574861188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-t78cg,Uid:07e1f13e-a7d4-496f-9f63-f96306459e61,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591442124536348,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:20.625280940Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-xhp4p,Uid:aca554db-371c-4aad-9edb-8724e17ed917,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,Cre
atedAt:1761591440613157096,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:17.984893463Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&PodSandboxMetadata{Name:gadget-5bx7q,Uid:ef4b0394-4dee-4b23-bee8-0787117f056f,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591439173395101,Labels:map[string]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.appar
mor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-10-27T18:57:18.154239705Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1ec5b960-2f51-438a-9968-46e1bea6ddc7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591438118533974,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\"
:\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-27T18:57:17.233514543Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-zg4tw,Uid:26b73888-1e70-456d-ab70-4392ce52af26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591434330085879,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plug
in-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:13.971229357Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&PodSandboxMetadata{Name:kube-proxy-5grdt,Uid:73ab29d4-f3af-4942-87b0-5b146ec49fd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591430883415022,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:09.924411526Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-f8dfl,Uid:7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591430862291960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T18:57:10.498142658Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-864929,Uid:4738620b04d3027787daeded7d8de7c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418359487933,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4738620b04d3027787daeded7d8de7c7,kubernetes.io/config.seen: 2025-10-27T18:56:57.270034320Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-864929,Uid:8f4246ed8c9b2f11e40ac4ed620904b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418358937051,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4246ed8c9b2f11e40ac4ed620904b3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8f4246ed8c9b2f11e40ac4ed620904b3,kubernetes.io/config.see
n: 2025-10-27T18:56:57.270044804Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-864929,Uid:2de27a2c807a456567dcafd8f96dd732,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418356553066,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.216:8443,kubernetes.io/config.hash: 2de27a2c807a456567dcafd8f96dd732,kubernetes.io/config.seen: 2025-10-27T18:56:57.270032461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&PodSandboxMetadata{Name:etcd-addons-86
4929,Uid:853670e29e0053cd2968e4d42e8dcd57,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761591418347207777,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.216:2379,kubernetes.io/config.hash: 853670e29e0053cd2968e4d42e8dcd57,kubernetes.io/config.seen: 2025-10-27T18:56:57.270026443Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0bff4e72-4541-4279-847f-70c86e2f5c95 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.580761074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0eaebe82-22f9-4bea-a8e4-fd79941be48d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.582103254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761591746582078531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:552224,},InodesUsed:&UInt64Value{Value:191,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0eaebe82-22f9-4bea-a8e4-fd79941be48d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.582666058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3322452c-9312-48f8-9fcd-5edd0fb4c373 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.582738290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3322452c-9312-48f8-9fcd-5edd0fb4c373 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.583184767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3
,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933faf9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f0
37e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd095138c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7b563f60d965e0e7630e1aee05c605209b846448c5c59f53c0a16a9f9d665d,PodSandboxId:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761591496414288177,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&Container
Metadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,P
odSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c
2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a
8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"
},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io
.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f
4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-man
ager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Stat
e:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3322452c-9312-48f8-9fcd-5edd0fb4c373 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.583883287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee29efcb-e551-4edc-ada4-ab1bbb344fa7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.584013149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee29efcb-e551-4edc-ada4-ab1bbb344fa7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.584519257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaa598112f171df13e78cd56d399d5cea5583cbab9f70582c179853419f0a95,PodSandboxId:0e930ac960395a1fe60ce33b3d0d23e5074c5bcf2cfcf870738b45425fc094f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761591555862014978,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e5f3a97-dcd1-44e6-920b-2953ee6ba066,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4aa82535ec10d187f1da703d58159fb09230f78d0581e0f49fbd4acd47482df,PodSandboxId:2e4a1f88f6c72c5d32f4d9fa16c7245440698c2e6c6940465c848ea8e3c1de72,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761591529033470607,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a668ad58-4082-4722-a352-3bd62c30df9b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a32d240f03f8f28e7d4e7a44d8c5ed0615b4f8a512dff263873f19db80541de,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761591522582012711,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0067ca876ce6c8bdc5053fc40be27170f81485094511709910b16e143a9e2fc4,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761591521027264059,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bd7ab79c70b2b1fe050919ff1dc62a9bd2f43e52e74b896feb06973205b4c86,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761591519397782799,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c
1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c291b0333c5db7a44ffbeef42ea3e322de328a2db3a212677c23a228d7be117,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761591515004169043,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba505fec54a4152ff5b929051ca72258b3111a7e5ab73be1ad55ceec66f8fb66,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761591505993152021,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b250e967b1910669d276d1a5519185d1aadfe512b72ec2b46eb44e2d08b2947c,PodSandboxId:3624eec81ad83c2fe1409cd8fd151fb85588b128a87a071e1d42fe81883bbad7,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3
,State:CONTAINER_RUNNING,CreatedAt:1761591504435510194,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-2kk6q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4df09867-d21a-494d-b1c1-b33d1ae05292,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07110c7b3afc08381c95acb068cfe5fd71524933faf9a6815a6f33f2f28c14b5,PodSandboxId:d3fe0c8c9df1bf22576a6f62d4487ebe483778329f044aaf12442f36aefee1c9,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f0
37e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761591502367296352,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2edb44-d6fd-41c7-aebc-45f7051be9b9,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f241dd9f7205d1dd095138c4d6b056bd582765d0b8e3d8bb89d772bfaae657ad,PodSandboxId:6429ac3aeaf4cf12d6b687f73be67ce1eb08e0da208ef1272ae3514a20ed0c84,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761591500984769440,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 923a1bc3-3658-4c92-9ac7-f6fb7cc49fdc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1708c06c7e746239971631d28cc4118fcf7c6f5e0ff884e4193277f8d4fe1045,PodSandboxId:6813db443ad42f30ede5948e121225fd273a9767846892f5abfd1c7e67717754,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591499124403764,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-9nfvf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e133be4d-c9ac-45ee-8523-3197eb5ae1dd,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8f995b816e5743f53a114660fa4536960d4b413e08ae8c78b70a56be317652f,PodSandboxId:41f0fc88d88a67c53a3bd864e17466349b4c7cd2b1545a854eefac2ca9cec7a5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761591498953436180,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-t78cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07e1f13e-a7d4-496f-9f63-f96306459e61,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7b563f60d965e0e7630e1aee05c605209b846448c5c59f53c0a16a9f9d665d,PodSandboxId:bc410cbb40cb5902da6bdf34d8d5242293447cf15cac9039be7ba9684081f6aa,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761591496414288177,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-xhp4p,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: aca554db-371c-4aad-9edb-8724e17ed917,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11101a79fc0739766a9d1c4f24680be46354448f79c0965c8186d69396bd6de8,PodSandboxId:a34c89c3d97f4534833c02b1092cbe693acbc1d81b74de61458ce121608460c7,Metadata:&Container
Metadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761591486710960910,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-5bx7q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: ef4b0394-4dee-4b23-bee8-0787117f056f,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c975912a90508c9994f2e3e844922ac61e9b8efd3e831e0addc0eeb3f78997,P
odSandboxId:eb20897f30dfb36e8f3c34ea19074dcb418adfec59e5f0b0a7e7d7001d52924b,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761591461490696586,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-zg4tw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26b73888-1e70-456d-ab70-4392ce52af26,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9580ed2258f1ddc819f6b60b3c
2ef2524bf0b58aa70e0aff2439347be11df4e9,PodSandboxId:d0de4be78d27d9e94647775771a19ac1751580111ae3739d05c71953c1faf14a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761591440550993928,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec5b960-2f51-438a-9968-46e1bea6ddc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:378ab83eabeec92fc7bf1059eab8071d79c91a
8ed0be14239fcda364f18c73e3,PodSandboxId:a87aa3850ab80908c43af3f2bbb9eca022489f0530ec2b8899475a9ac087e88d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761591431971845716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f8dfl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ada2d5f-c124-4130-8e4d-f5f6f0d2b856,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"
},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c,PodSandboxId:1549458dc06ee22d63cae83ec65fb1b67f7fe3dd07b0cb035e9908c6a184cd2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761591431223440952,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5grdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ab29d4-f3af-4942-87b0-5b146ec49fd2,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c,PodSandboxId:d582ed9677d49ebcc2ef56ec9d4db2cd633d5a4f0d9dbfb7d9840888bee96671,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761591418598524106,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853670e29e0053cd2968e4d42e8dcd57,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io
.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522,PodSandboxId:34da1388827880873125688a0ac800d701cef134bf76ff2b7101d97c3570ac69,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761591418609004347,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f
4246ed8c9b2f11e40ac4ed620904b3,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c,PodSandboxId:af045b669200a98e83828b7038b0ba1371f3f501d38a1aaf2a24eaffe8481851,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761591418585889029,Labels:map[string]string{io.kubernetes.container.name: kube-controller-man
ager,io.kubernetes.pod.name: kube-controller-manager-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4738620b04d3027787daeded7d8de7c7,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e,PodSandboxId:c5570e67c7a56294b118428e679ff8b66f3a3e9b719b89e2b9dfb87dfa3f95f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Stat
e:CONTAINER_RUNNING,CreatedAt:1761591418576290141,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-864929,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2de27a2c807a456567dcafd8f96dd732,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee29efcb-e551-4edc-ada4-ab1bbb344fa7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.586585637Z" level=debug msg="Too many requests to https://registry-1.docker.io/v2/library/busybox/manifests/stable: sleeping for 2.000000 seconds before next attempt" file="docker/docker_client.go:596"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.588061657Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:docker.io/kicbase/echo-server:1.0,Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:01:37.245283681Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=edbe37a1-bf3d-41ef-a90b-96b748a914c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.588866093Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" file="server/image_status.go:27" id=edbe37a1-bf3d-41ef-a90b-96b748a914c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.589464499Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.589685206Z" level=debug msg="Can't find docker.io/kicbase/echo-server:1.0" file="server/image_status.go:97" id=edbe37a1-bf3d-41ef-a90b-96b748a914c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.589717002Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" file="server/image_status.go:111" id=edbe37a1-bf3d-41ef-a90b-96b748a914c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.589739284Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" file="server/image_status.go:33" id=edbe37a1-bf3d-41ef-a90b-96b748a914c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:26 addons-864929 crio[816]: time="2025-10-27 19:02:26.589783664Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=edbe37a1-bf3d-41ef-a90b-96b748a914c4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	adaa598112f17       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                                              3 minutes ago       Running             nginx                                    0                   0e930ac960395       nginx
	c4aa82535ec10       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   2e4a1f88f6c72       busybox
	9a32d240f03f8       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	0067ca876ce6c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	7bd7ab79c70b2       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	9c291b0333c5d       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	ba505fec54a41       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                4 minutes ago       Running             node-driver-registrar                    0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	b250e967b1910       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   4 minutes ago       Running             csi-external-health-monitor-controller   0                   3624eec81ad83       csi-hostpathplugin-2kk6q
	07110c7b3afc0       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              4 minutes ago       Running             csi-resizer                              0                   d3fe0c8c9df1b       csi-hostpath-resizer-0
	f241dd9f7205d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago       Running             csi-attacher                             0                   6429ac3aeaf4c       csi-hostpath-attacher-0
	1708c06c7e746       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   6813db443ad42       snapshot-controller-7d9fbc56b8-9nfvf
	a8f995b816e57       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   41f0fc88d88a6       snapshot-controller-7d9fbc56b8-t78cg
	6f7b563f60d96       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   bc410cbb40cb5       local-path-provisioner-648f6765c9-xhp4p
	11101a79fc073       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            4 minutes ago       Running             gadget                                   0                   a34c89c3d97f4       gadget-5bx7q
	47c975912a905       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago       Running             amd-gpu-device-plugin                    0                   eb20897f30dfb       amd-gpu-device-plugin-zg4tw
	9580ed2258f1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             5 minutes ago       Running             storage-provisioner                      0                   d0de4be78d27d       storage-provisioner
	378ab83eabeec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             5 minutes ago       Running             coredns                                  0                   a87aa3850ab80       coredns-66bc5c9577-f8dfl
	c25a92cc96070       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             5 minutes ago       Running             kube-proxy                               0                   1549458dc06ee       kube-proxy-5grdt
	23a81c0c110d4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago       Running             kube-scheduler                           0                   34da138882788       kube-scheduler-addons-864929
	473b2a7d1d8d4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   d582ed9677d49       etcd-addons-864929
	4eba041d7c32a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago       Running             kube-controller-manager                  0                   af045b669200a       kube-controller-manager-addons-864929
	a0eb12ce7e210       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago       Running             kube-apiserver                           0                   c5570e67c7a56       kube-apiserver-addons-864929
	
	
	==> coredns [378ab83eabeec92fc7bf1059eab8071d79c91a8ed0be14239fcda364f18c73e3] <==
	[INFO] 10.244.0.22:33406 - 12503 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000417521s
	[INFO] 10.244.0.22:45442 - 11795 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000290954s
	[INFO] 10.244.0.22:33406 - 7960 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000335707s
	[INFO] 10.244.0.22:45442 - 20036 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129949s
	[INFO] 10.244.0.22:33406 - 4331 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000324272s
	[INFO] 10.244.0.22:45442 - 44888 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000091189s
	[INFO] 10.244.0.22:45442 - 15636 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099308s
	[INFO] 10.244.0.22:33406 - 30775 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000370838s
	[INFO] 10.244.0.22:45442 - 19061 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000449213s
	[INFO] 10.244.0.22:45442 - 45959 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000473322s
	[INFO] 10.244.0.22:33406 - 32269 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000888744s
	[INFO] 10.244.0.22:38947 - 62796 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000167539s
	[INFO] 10.244.0.22:38947 - 23906 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087006s
	[INFO] 10.244.0.22:38947 - 43877 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080495s
	[INFO] 10.244.0.22:38947 - 21432 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077684s
	[INFO] 10.244.0.22:38947 - 62211 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065991s
	[INFO] 10.244.0.22:38947 - 59955 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118183s
	[INFO] 10.244.0.22:50942 - 2529 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001444462s
	[INFO] 10.244.0.22:38947 - 39721 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000293034s
	[INFO] 10.244.0.22:50942 - 14095 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000123595s
	[INFO] 10.244.0.22:50942 - 7851 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000169449s
	[INFO] 10.244.0.22:50942 - 36653 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129196s
	[INFO] 10.244.0.22:50942 - 46135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000136478s
	[INFO] 10.244.0.22:50942 - 14917 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138736s
	[INFO] 10.244.0.22:50942 - 51502 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077287s
	
	
	==> describe nodes <==
	Name:               addons-864929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-864929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-864929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864929
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-864929"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864929
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:02:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:56:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 18:59:40 +0000   Mon, 27 Oct 2025 18:57:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    addons-864929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 780db33d391d49adb77a2a509bc06274
	  System UUID:                780db33d-391d-49ad-b77a-2a509bc06274
	  Boot ID:                    6fa66b3e-a553-40c9-b7f0-71dd11966be5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  default                     hello-world-app-5d498dc89-wmhrh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     task-pv-pod                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  gadget                      gadget-5bx7q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 amd-gpu-device-plugin-zg4tw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 coredns-66bc5c9577-f8dfl                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m16s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 csi-hostpathplugin-2kk6q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 etcd-addons-864929                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m21s
	  kube-system                 kube-apiserver-addons-864929               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-addons-864929      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-5grdt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-addons-864929               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-9nfvf       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-t78cg       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  local-path-storage          local-path-provisioner-648f6765c9-xhp4p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m14s  kube-proxy       
	  Normal  Starting                 5m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s  kubelet          Node addons-864929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s  kubelet          Node addons-864929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s  kubelet          Node addons-864929 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m20s  kubelet          Node addons-864929 status is now: NodeReady
	  Normal  RegisteredNode           5m17s  node-controller  Node addons-864929 event: Registered Node addons-864929 in Controller
	
	
	==> dmesg <==
	[  +1.035983] kauditd_printk_skb: 321 callbacks suppressed
	[  +0.074749] kauditd_printk_skb: 215 callbacks suppressed
	[  +0.252144] kauditd_printk_skb: 390 callbacks suppressed
	[ +13.923984] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.170668] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.426658] kauditd_printk_skb: 32 callbacks suppressed
	[Oct27 18:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.493718] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.181992] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.064652] kauditd_printk_skb: 94 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.654510] kauditd_printk_skb: 156 callbacks suppressed
	[  +5.691951] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.014421] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.186188] kauditd_printk_skb: 26 callbacks suppressed
	[ +13.043727] kauditd_printk_skb: 47 callbacks suppressed
	[Oct27 18:59] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.809040] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.269736] kauditd_printk_skb: 141 callbacks suppressed
	[  +0.027386] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.740720] kauditd_printk_skb: 139 callbacks suppressed
	[ +11.255527] kauditd_printk_skb: 58 callbacks suppressed
	[Oct27 19:01] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.261320] kauditd_printk_skb: 46 callbacks suppressed
	[Oct27 19:02] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [473b2a7d1d8d4e7a553f7f11a2d0384f3251123a5c4549760e65d8ec7b53033c] <==
	{"level":"info","ts":"2025-10-27T18:57:56.410315Z","caller":"traceutil/trace.go:172","msg":"trace[229503704] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:948; }","duration":"134.367662ms","start":"2025-10-27T18:57:56.275938Z","end":"2025-10-27T18:57:56.410305Z","steps":["trace[229503704] 'agreement among raft nodes before linearized reading'  (duration: 132.957173ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:57:56.410060Z","caller":"traceutil/trace.go:172","msg":"trace[891786162] linearizableReadLoop","detail":"{readStateIndex:975; appliedIndex:975; }","duration":"131.983723ms","start":"2025-10-27T18:57:56.275942Z","end":"2025-10-27T18:57:56.407926Z","steps":["trace[891786162] 'read index received'  (duration: 131.979399ms)","trace[891786162] 'applied index is now lower than readState.Index'  (duration: 3.544µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:57:56.412263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.94226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:57:56.412309Z","caller":"traceutil/trace.go:172","msg":"trace[262639361] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:948; }","duration":"119.995829ms","start":"2025-10-27T18:57:56.292305Z","end":"2025-10-27T18:57:56.412301Z","steps":["trace[262639361] 'agreement among raft nodes before linearized reading'  (duration: 119.922856ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.125078Z","caller":"traceutil/trace.go:172","msg":"trace[493772090] linearizableReadLoop","detail":"{readStateIndex:1016; appliedIndex:1016; }","duration":"108.067998ms","start":"2025-10-27T18:58:11.016880Z","end":"2025-10-27T18:58:11.124948Z","steps":["trace[493772090] 'read index received'  (duration: 108.0588ms)","trace[493772090] 'applied index is now lower than readState.Index'  (duration: 7.728µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:11.125323Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.422079ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2025-10-27T18:58:11.125351Z","caller":"traceutil/trace.go:172","msg":"trace[2111825061] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:984; }","duration":"108.467942ms","start":"2025-10-27T18:58:11.016877Z","end":"2025-10-27T18:58:11.125345Z","steps":["trace[2111825061] 'agreement among raft nodes before linearized reading'  (duration: 108.282493ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.125763Z","caller":"traceutil/trace.go:172","msg":"trace[1553925984] transaction","detail":"{read_only:false; response_revision:985; number_of_response:1; }","duration":"186.294868ms","start":"2025-10-27T18:58:10.939461Z","end":"2025-10-27T18:58:11.125756Z","steps":["trace[1553925984] 'process raft request'  (duration: 186.212532ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:11.142309Z","caller":"traceutil/trace.go:172","msg":"trace[839786025] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"138.309645ms","start":"2025-10-27T18:58:11.003986Z","end":"2025-10-27T18:58:11.142296Z","steps":["trace[839786025] 'process raft request'  (duration: 138.098647ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:13.531058Z","caller":"traceutil/trace.go:172","msg":"trace[30205562] linearizableReadLoop","detail":"{readStateIndex:1025; appliedIndex:1025; }","duration":"254.599969ms","start":"2025-10-27T18:58:13.276437Z","end":"2025-10-27T18:58:13.531037Z","steps":["trace[30205562] 'read index received'  (duration: 254.54701ms)","trace[30205562] 'applied index is now lower than readState.Index'  (duration: 3.554µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:13.531448Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.007373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.531551Z","caller":"traceutil/trace.go:172","msg":"trace[1564347891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:993; }","duration":"255.121817ms","start":"2025-10-27T18:58:13.276412Z","end":"2025-10-27T18:58:13.531534Z","steps":["trace[1564347891] 'agreement among raft nodes before linearized reading'  (duration: 254.972595ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:13.531509Z","caller":"traceutil/trace.go:172","msg":"trace[1892686575] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"391.579159ms","start":"2025-10-27T18:58:13.139914Z","end":"2025-10-27T18:58:13.531493Z","steps":["trace[1892686575] 'process raft request'  (duration: 391.354515ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:13.531824Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T18:58:13.139894Z","time spent":"391.808403ms","remote":"127.0.0.1:52894","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:985 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-27T18:58:13.532035Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.107659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.532079Z","caller":"traceutil/trace.go:172","msg":"trace[2038100128] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"126.149586ms","start":"2025-10-27T18:58:13.405923Z","end":"2025-10-27T18:58:13.532072Z","steps":["trace[2038100128] 'agreement among raft nodes before linearized reading'  (duration: 126.101237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:13.531900Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"238.553844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:13.532339Z","caller":"traceutil/trace.go:172","msg":"trace[854445731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"239.005326ms","start":"2025-10-27T18:58:13.293326Z","end":"2025-10-27T18:58:13.532332Z","steps":["trace[854445731] 'agreement among raft nodes before linearized reading'  (duration: 238.54211ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:34.711927Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.634065ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:34.712061Z","caller":"traceutil/trace.go:172","msg":"trace[895249490] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1119; }","duration":"113.796373ms","start":"2025-10-27T18:58:34.598253Z","end":"2025-10-27T18:58:34.712049Z","steps":["trace[895249490] 'range keys from in-memory index tree'  (duration: 113.587415ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T18:58:38.243222Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.660222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:38.243481Z","caller":"traceutil/trace.go:172","msg":"trace[698536657] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses; range_end:; response_count:0; response_revision:1136; }","duration":"193.931351ms","start":"2025-10-27T18:58:38.049536Z","end":"2025-10-27T18:58:38.243467Z","steps":["trace[698536657] 'range keys from in-memory index tree'  (duration: 193.593238ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:42.492190Z","caller":"traceutil/trace.go:172","msg":"trace[1973944569] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"119.999969ms","start":"2025-10-27T18:58:42.372178Z","end":"2025-10-27T18:58:42.492178Z","steps":["trace[1973944569] 'process raft request'  (duration: 119.899102ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:59:11.698647Z","caller":"traceutil/trace.go:172","msg":"trace[361898695] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1348; }","duration":"135.106526ms","start":"2025-10-27T18:59:11.563481Z","end":"2025-10-27T18:59:11.698587Z","steps":["trace[361898695] 'process raft request'  (duration: 135.018245ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:59:14.141155Z","caller":"traceutil/trace.go:172","msg":"trace[837123529] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"206.995462ms","start":"2025-10-27T18:59:13.934147Z","end":"2025-10-27T18:59:14.141142Z","steps":["trace[837123529] 'process raft request'  (duration: 206.907826ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:02:27 up 5 min,  0 users,  load average: 0.52, 1.44, 0.82
	Linux addons-864929 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a0eb12ce7e2105c2d5af02d2296b784e8c1e6290e76a00061c712a7d7d680f8e] <==
	W1027 18:57:22.323729       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:22.344896       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1027 18:57:23.882919       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.65.231"}
	W1027 18:57:39.184847       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:39.206284       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:39.243149       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:39.253377       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1027 18:58:11.250340       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:11.250699       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:11.250761       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:11.256876       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.257462       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.269028       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:11.311326       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.9.84:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.9.84:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:11.522386       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:58:56.253891       1 conn.go:339] Error on socket receive: read tcp 192.168.39.216:8443->192.168.39.1:59114: use of closed network connection
	E1027 18:58:56.463232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.216:8443->192.168.39.1:59134: use of closed network connection
	I1027 18:59:05.726497       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.152.62"}
	I1027 18:59:12.082722       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 18:59:12.280737       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1027 18:59:12.320355       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.254.157"}
	I1027 19:01:37.350902       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.103.64"}
	
	
	==> kube-controller-manager [4eba041d7c32a438a0e3146d823021913c8d115f0730a389f250907c87a6d45c] <==
	I1027 18:57:09.197029       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 18:57:09.197199       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 18:57:09.198769       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 18:57:09.199421       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 18:57:09.199859       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 18:57:09.202220       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 18:57:09.202262       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 18:57:09.204812       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-864929" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:09.205058       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 18:57:09.205335       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 18:57:09.209450       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	E1027 18:57:17.574464       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 18:57:39.171065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:57:39.171215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:57:39.171282       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:57:39.201480       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:57:39.217694       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 18:57:39.272250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:39.320974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 18:58:09.292080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:58:09.340459       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:59:09.765540       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1027 18:59:29.759701       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1027 18:59:41.557310       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1027 19:01:52.106852       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [c25a92cc96070b1a3ab5a630802cbe36b41664194b39b44b313e4f4f30c3e83c] <==
	I1027 18:57:11.964888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:12.066455       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:12.066978       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.216"]
	E1027 18:57:12.067747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:12.441037       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 18:57:12.441091       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 18:57:12.441116       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:12.549755       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:12.551449       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:12.551483       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:12.643682       1 config.go:200] "Starting service config controller"
	I1027 18:57:12.643795       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:12.644779       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:12.644795       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:12.644821       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:12.644825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:12.652942       1 config.go:309] "Starting node config controller"
	I1027 18:57:12.654707       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:12.654716       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:12.746008       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 18:57:12.746581       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:12.760983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [23a81c0c110d4803997f153b474b99fa2c8dd49df03bcd06e0deab806e84e522] <==
	E1027 18:57:02.235831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:02.235898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:02.236336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:02.236405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:02.236138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:02.236633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:02.236754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:02.237054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:02.237146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:02.237161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.169999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:03.170507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:03.173314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 18:57:03.241827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 18:57:03.244384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:03.277509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:03.311109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 18:57:03.348245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.360178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:03.360672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:03.390147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:03.532742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:03.622727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:03.635923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1027 18:57:06.218759       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:01:45 addons-864929 kubelet[1502]: E1027 19:01:45.411008    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb\": container with ID starting with b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb not found: ID does not exist" containerID="b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb"
	Oct 27 19:01:45 addons-864929 kubelet[1502]: I1027 19:01:45.411346    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb"} err="failed to get container status \"b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb\": rpc error: code = NotFound desc = could not find container \"b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb\": container with ID starting with b8806283adced3b462f1661f7b82158e81dfd3ee30abbda8de593e5214c7fbeb not found: ID does not exist"
	Oct 27 19:01:45 addons-864929 kubelet[1502]: E1027 19:01:45.748966    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591705748468600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:45 addons-864929 kubelet[1502]: E1027 19:01:45.749013    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591705748468600  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:52 addons-864929 kubelet[1502]: I1027 19:01:52.305982    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-zg4tw" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:01:55 addons-864929 kubelet[1502]: E1027 19:01:55.751934    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591715751446992  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:55 addons-864929 kubelet[1502]: E1027 19:01:55.751978    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591715751446992  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:01:56 addons-864929 kubelet[1502]: E1027 19:01:56.437946    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 27 19:01:56 addons-864929 kubelet[1502]: E1027 19:01:56.438012    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 27 19:01:56 addons-864929 kubelet[1502]: E1027 19:01:56.438226    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(504b682e-4d7e-4f98-913e-efaa9ccfd4a1): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:01:56 addons-864929 kubelet[1502]: E1027 19:01:56.438270    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="504b682e-4d7e-4f98-913e-efaa9ccfd4a1"
	Oct 27 19:02:05 addons-864929 kubelet[1502]: E1027 19:02:05.757352    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591725756266726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:02:05 addons-864929 kubelet[1502]: E1027 19:02:05.757442    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591725756266726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:02:08 addons-864929 kubelet[1502]: I1027 19:02:08.100226    1502 scope.go:117] "RemoveContainer" containerID="d3d0a056d958ba0e9df5b60dee569ae445476e826c493b74d7553383ff024320"
	Oct 27 19:02:08 addons-864929 kubelet[1502]: I1027 19:02:08.222378    1502 scope.go:117] "RemoveContainer" containerID="fb2c43aca3bd88ad2b0df10f83323a46e0f850b0a9fa20cbddf8353f1fcdc4ab"
	Oct 27 19:02:09 addons-864929 kubelet[1502]: E1027 19:02:09.306392    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="504b682e-4d7e-4f98-913e-efaa9ccfd4a1"
	Oct 27 19:02:15 addons-864929 kubelet[1502]: E1027 19:02:15.761167    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591735760508420  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:02:15 addons-864929 kubelet[1502]: E1027 19:02:15.761359    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591735760508420  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:02:25 addons-864929 kubelet[1502]: E1027 19:02:25.764762    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761591745764220857  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:02:25 addons-864929 kubelet[1502]: E1027 19:02:25.764800    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761591745764220857  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:552224}  inodes_used:{value:191}}"
	Oct 27 19:02:26 addons-864929 kubelet[1502]: E1027 19:02:26.529097    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Oct 27 19:02:26 addons-864929 kubelet[1502]: E1027 19:02:26.529168    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Oct 27 19:02:26 addons-864929 kubelet[1502]: E1027 19:02:26.529476    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container hello-world-app start failed in pod hello-world-app-5d498dc89-wmhrh_default(ce19a12f-43e8-4993-a64c-ef90bd25467c): ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:02:26 addons-864929 kubelet[1502]: E1027 19:02:26.529584    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-wmhrh" podUID="ce19a12f-43e8-4993-a64c-ef90bd25467c"
	Oct 27 19:02:26 addons-864929 kubelet[1502]: E1027 19:02:26.590372    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-wmhrh" podUID="ce19a12f-43e8-4993-a64c-ef90bd25467c"
	
	
	==> storage-provisioner [9580ed2258f1ddc819f6b60b3c2ef2524bf0b58aa70e0aff2439347be11df4e9] <==
	W1027 19:02:01.345391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:03.349525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:03.357363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:05.361132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:05.367309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:07.370475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:07.379444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:09.383324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:09.394832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:11.399069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:11.407190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:13.412214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:13.420051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:15.424807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:15.430665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:17.433998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:17.440051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:19.444956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:19.453345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:21.457136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:21.464296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:23.468552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:23.478103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:25.482775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:25.495032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
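Side note on the storage-provisioner section above: every poll emits "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" because the provisioner still reads core v1 Endpoints objects. A minimal client-go sketch of the suggested replacement, listing discovery.k8s.io/v1 EndpointSlices instead (the kubeconfig path, namespace and service name below are illustrative assumptions, not values taken from this run):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative values only; adjust for the cluster under test.
	kubeconfig := clientcmd.RecommendedHomeFile // typically ~/.kube/config
	namespace := "default"
	service := "kubernetes"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	// EndpointSlices carry the "kubernetes.io/service-name" label that ties
	// them back to their Service, replacing the 1:1 v1 Endpoints object.
	slices, err := cs.DiscoveryV1().EndpointSlices(namespace).List(context.TODO(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=" + service,
	})
	if err != nil {
		log.Fatalf("list endpointslices: %v", err)
	}
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Printf("%s: %v\n", s.Name, ep.Addresses)
		}
	}
}

Filtering on the kubernetes.io/service-name label gives the same per-Service view the deprecated Endpoints object used to provide.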
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-864929 -n addons-864929
helpers_test.go:269: (dbg) Run:  kubectl --context addons-864929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path
helpers_test.go:290: (dbg) kubectl --context addons-864929 describe pod hello-world-app-5d498dc89-wmhrh task-pv-pod test-local-path:

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-wmhrh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 19:01:37 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xpvrk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xpvrk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  50s   default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-wmhrh to addons-864929
	  Normal   Pulling    50s   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     1s    kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     1s    kubelet            Error: ErrImagePull
	  Normal   BackOff    1s    kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     1s    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 18:59:31 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6h8cn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-6h8cn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m56s               default-scheduler  Successfully assigned default/task-pv-pod to addons-864929
	  Warning  Failed     31s (x2 over 2m1s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     31s (x2 over 2m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    18s (x2 over 2m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     18s (x2 over 2m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x3 over 2m55s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-864929/192.168.39.216
	Start Time:       Mon, 27 Oct 2025 18:59:25 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mgjnr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-mgjnr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/test-local-path to addons-864929
	  Warning  Failed     2m32s                kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x2 over 2m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     61s                  kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:4a35a7836fe08f340a42e25c4ac5eef4439585bbbb817b7bd28b2cd87c742642 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    50s (x2 over 2m31s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     50s (x2 over 2m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    38s (x3 over 3m2s)   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestAddons/parallel/LocalPath FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.821230332s)
--- FAIL: TestAddons/parallel/LocalPath (229.68s)
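All three non-running pods reported in this post-mortem (hello-world-app-5d498dc89-wmhrh, task-pv-pod, test-local-path) fail the same way: unauthenticated pulls from docker.io hit the registry's pull rate limit ("toomanyrequests"), so the containers never start. A minimal client-go sketch of how a post-mortem helper could flag that signature from pod events (the kubeconfig path, namespace and field selector are illustrative assumptions, not part of the test harness):

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// rateLimited reports whether an event message matches the Docker Hub
// unauthenticated pull limit failure seen in the describe output above.
func rateLimited(msg string) bool {
	return strings.Contains(msg, "toomanyrequests")
}

func main() {
	// Illustrative kubeconfig path; adjust for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	// Scan Failed events in "default" for rate-limited image pulls, the
	// shared cause behind hello-world-app, task-pv-pod and test-local-path.
	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "reason=Failed",
	})
	if err != nil {
		log.Fatalf("list events: %v", err)
	}
	for _, e := range events.Items {
		if rateLimited(e.Message) {
			fmt.Printf("%s/%s hit the Docker Hub pull limit: %s\n",
				e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
		}
	}
}

Authenticating the pulls or pointing the test images at a mirror that is not rate limited would remove this failure mode; the retries visible in the events above only postpone it.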

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074768 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074768 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074768 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074768 --alsologtostderr -v=1] stderr:
I1027 19:12:16.042725   69030 out.go:360] Setting OutFile to fd 1 ...
I1027 19:12:16.042997   69030 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:12:16.043011   69030 out.go:374] Setting ErrFile to fd 2...
I1027 19:12:16.043016   69030 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:12:16.043327   69030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
I1027 19:12:16.043713   69030 mustload.go:65] Loading cluster: functional-074768
I1027 19:12:16.044234   69030 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:12:16.046960   69030 host.go:66] Checking if "functional-074768" exists ...
I1027 19:12:16.047342   69030 api_server.go:166] Checking apiserver status ...
I1027 19:12:16.047437   69030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1027 19:12:16.050917   69030 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:12:16.051435   69030 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:12:16.051474   69030 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:12:16.051720   69030 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:12:16.194878   69030 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6515/cgroup
W1027 19:12:16.219823   69030 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6515/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1027 19:12:16.219941   69030 ssh_runner.go:195] Run: ls
I1027 19:12:16.231559   69030 api_server.go:253] Checking apiserver healthz at https://192.168.39.117:8441/healthz ...
I1027 19:12:16.241533   69030 api_server.go:279] https://192.168.39.117:8441/healthz returned 200:
ok
W1027 19:12:16.241600   69030 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1027 19:12:16.241843   69030 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:12:16.241864   69030 addons.go:69] Setting dashboard=true in profile "functional-074768"
I1027 19:12:16.241875   69030 addons.go:238] Setting addon dashboard=true in "functional-074768"
I1027 19:12:16.241914   69030 host.go:66] Checking if "functional-074768" exists ...
I1027 19:12:16.246429   69030 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1027 19:12:16.248139   69030 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1027 19:12:16.249475   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1027 19:12:16.249497   69030 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1027 19:12:16.252791   69030 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:12:16.253364   69030 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:12:16.253397   69030 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:12:16.253605   69030 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:12:16.388645   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1027 19:12:16.388683   69030 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1027 19:12:16.416739   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1027 19:12:16.416767   69030 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1027 19:12:16.449531   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1027 19:12:16.449559   69030 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1027 19:12:16.493238   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1027 19:12:16.493261   69030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1027 19:12:16.547450   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1027 19:12:16.547484   69030 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1027 19:12:16.615427   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1027 19:12:16.615457   69030 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1027 19:12:16.700498   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1027 19:12:16.700540   69030 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1027 19:12:16.749846   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1027 19:12:16.749883   69030 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1027 19:12:16.796269   69030 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1027 19:12:16.796299   69030 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1027 19:12:16.863384   69030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1027 19:12:18.083414   69030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.219980124s)
I1027 19:12:18.085446   69030 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-074768 addons enable metrics-server

                                                
                                                
I1027 19:12:18.086775   69030 addons.go:201] Writing out "functional-074768" config to set dashboard=true...
W1027 19:12:18.087130   69030 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1027 19:12:18.088099   69030 kapi.go:59] client config for functional-074768: &rest.Config{Host:"https://192.168.39.117:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.key", CAFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1027 19:12:18.088523   69030 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1027 19:12:18.088540   69030 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1027 19:12:18.088545   69030 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1027 19:12:18.088549   69030 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1027 19:12:18.088552   69030 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1027 19:12:18.102737   69030 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  47ec6d65-8d51-40c4-a557-de428429bb02 779 0 2025-10-27 19:12:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-27 19:12:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.109.120,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.109.120],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1027 19:12:18.102937   69030 out.go:285] * Launching proxy ...
* Launching proxy ...
I1027 19:12:18.103049   69030 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-074768 proxy --port 36195]
I1027 19:12:18.103554   69030 dashboard.go:157] Waiting for kubectl to output host:port ...
I1027 19:12:18.151832   69030 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1027 19:12:18.151885   69030 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1027 19:12:18.175085   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[55306418-a223-4570-8e13-1cfeedb398a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150a600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040cdc0 TLS:<nil>}
I1027 19:12:18.175198   69030 retry.go:31] will retry after 108.004µs: Temporary Error: unexpected response code: 503
I1027 19:12:18.184872   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[49fe45d6-d9e7-49c9-921d-7032d01b9c95] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc0016ba600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043e3c0 TLS:<nil>}
I1027 19:12:18.184974   69030 retry.go:31] will retry after 120.938µs: Temporary Error: unexpected response code: 503
I1027 19:12:18.195430   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3511a9c0-5869-4df7-a422-75d52035cd46] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c20dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040d2c0 TLS:<nil>}
I1027 19:12:18.195530   69030 retry.go:31] will retry after 157.706µs: Temporary Error: unexpected response code: 503
I1027 19:12:18.207279   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0b1da5e7-2d3b-438a-b70f-fbafe075f110] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150a700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029ec80 TLS:<nil>}
I1027 19:12:18.207344   69030 retry.go:31] will retry after 467.767µs: Temporary Error: unexpected response code: 503
I1027 19:12:18.213854   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c356724-1f22-4126-a71e-fc04071e1eb9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc0016ba700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043e500 TLS:<nil>}
I1027 19:12:18.213931   69030 retry.go:31] will retry after 381.288µs: Temporary Error: unexpected response code: 503
I1027 19:12:18.222316   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d376440-4746-4226-a0b1-e1e6e1168fdd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150a800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040d680 TLS:<nil>}
I1027 19:12:18.222384   69030 retry.go:31] will retry after 967.391µs: Temporary Error: unexpected response code: 503
I1027 19:12:18.228852   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b6f70bc-99ae-4014-830d-db66be99b85f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc0016ba800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043e640 TLS:<nil>}
I1027 19:12:18.228924   69030 retry.go:31] will retry after 1.441948ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.239783   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6bfa0dce-b326-4a54-af3c-7fb54f3b379a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c20f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040d7c0 TLS:<nil>}
I1027 19:12:18.239844   69030 retry.go:31] will retry after 1.538192ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.261837   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72b4d78f-5bf5-466d-a5fe-cc6928f73f42] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc0016ba900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029edc0 TLS:<nil>}
I1027 19:12:18.261915   69030 retry.go:31] will retry after 2.708968ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.275854   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c48f9e1-868b-4382-bc18-5c399fa78854] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c21000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040da40 TLS:<nil>}
I1027 19:12:18.275945   69030 retry.go:31] will retry after 3.553494ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.295881   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[08448171-c158-4d80-a942-79e87ea26d6b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c210c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029ef00 TLS:<nil>}
I1027 19:12:18.295965   69030 retry.go:31] will retry after 3.944034ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.309106   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9fe1696f-99d8-48cc-9807-5e63dfae0f4b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150a900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f040 TLS:<nil>}
I1027 19:12:18.309171   69030 retry.go:31] will retry after 7.876005ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.322166   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad22b9f7-5637-47bb-8153-917bf1b463ac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c211c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043e780 TLS:<nil>}
I1027 19:12:18.322233   69030 retry.go:31] will retry after 11.771904ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.340483   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[06ddf291-dc8f-469e-88ba-17dffc628917] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150aa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f180 TLS:<nil>}
I1027 19:12:18.340564   69030 retry.go:31] will retry after 20.319886ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.365027   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[90325881-a7d0-4e61-812e-4d84185508c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150aac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043ea00 TLS:<nil>}
I1027 19:12:18.365120   69030 retry.go:31] will retry after 35.829207ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.408247   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fa9bad05-79de-4934-9921-50afe65dc7d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150ab80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043ec80 TLS:<nil>}
I1027 19:12:18.408356   69030 retry.go:31] will retry after 26.020951ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.439749   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2c65fbf5-2718-45a8-b9f1-2ab6b5624b59] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c212c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043edc0 TLS:<nil>}
I1027 19:12:18.439816   69030 retry.go:31] will retry after 38.879021ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.490401   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c7f53bc3-8859-48b9-a924-4dc9c714758a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c213c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f2c0 TLS:<nil>}
I1027 19:12:18.490498   69030 retry.go:31] will retry after 141.019157ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.638521   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9a085d9-6d5f-4ff6-a3b9-5b17cd71c59c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150ac80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f400 TLS:<nil>}
I1027 19:12:18.638592   69030 retry.go:31] will retry after 151.623105ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.795811   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38eaadc8-0cee-4c7c-a490-c238133d4741] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc00150ad40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043ef00 TLS:<nil>}
I1027 19:12:18.795885   69030 retry.go:31] will retry after 186.5881ms: Temporary Error: unexpected response code: 503
I1027 19:12:18.986842   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5aa2806d-c9f5-4df1-bb7a-7ef6367ae58b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:18 GMT]] Body:0xc000c21500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043f180 TLS:<nil>}
I1027 19:12:18.986922   69030 retry.go:31] will retry after 452.385581ms: Temporary Error: unexpected response code: 503
I1027 19:12:19.442921   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[251cfd1d-033f-4c47-94a1-9036b9b85e9b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:19 GMT]] Body:0xc00150ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f540 TLS:<nil>}
I1027 19:12:19.443007   69030 retry.go:31] will retry after 729.144823ms: Temporary Error: unexpected response code: 503
I1027 19:12:20.175501   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1684c181-a76b-467b-ba28-fe031ff0f20e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:20 GMT]] Body:0xc000c21600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00043f2c0 TLS:<nil>}
I1027 19:12:20.175601   69030 retry.go:31] will retry after 925.651814ms: Temporary Error: unexpected response code: 503
I1027 19:12:21.106325   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[515e48f4-1045-41ee-94b4-b6c097e48c73] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:21 GMT]] Body:0xc00158ad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040db80 TLS:<nil>}
I1027 19:12:21.106395   69030 retry.go:31] will retry after 1.333142348s: Temporary Error: unexpected response code: 503
I1027 19:12:22.444171   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3c08c6a5-1336-4d87-a0ab-677c3b5971f7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:22 GMT]] Body:0xc0016bab80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000310000 TLS:<nil>}
I1027 19:12:22.444242   69030 retry.go:31] will retry after 1.062250469s: Temporary Error: unexpected response code: 503
I1027 19:12:23.510695   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66561df0-676f-4910-a539-d42e813ef00b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:23 GMT]] Body:0xc000c216c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040dcc0 TLS:<nil>}
I1027 19:12:23.510775   69030 retry.go:31] will retry after 3.031356649s: Temporary Error: unexpected response code: 503
I1027 19:12:26.546445   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ba8856d-5de4-4227-885d-1c02c4a74b33] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:26 GMT]] Body:0xc0016bac80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f680 TLS:<nil>}
I1027 19:12:26.546516   69030 retry.go:31] will retry after 4.956668034s: Temporary Error: unexpected response code: 503
I1027 19:12:31.506835   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6b20db3e-a05c-4f28-8825-08b870e9fab5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:31 GMT]] Body:0xc00158ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029f900 TLS:<nil>}
I1027 19:12:31.506905   69030 retry.go:31] will retry after 4.853135399s: Temporary Error: unexpected response code: 503
I1027 19:12:36.365235   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[629d10bd-d88b-414e-9826-5666d7307f32] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:36 GMT]] Body:0xc000c21840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000310140 TLS:<nil>}
I1027 19:12:36.365325   69030 retry.go:31] will retry after 9.878951364s: Temporary Error: unexpected response code: 503
I1027 19:12:46.247961   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54b7e57f-4824-4476-9eb1-b8afa96c77cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:12:46 GMT]] Body:0xc00158af00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029fa40 TLS:<nil>}
I1027 19:12:46.248065   69030 retry.go:31] will retry after 14.269993409s: Temporary Error: unexpected response code: 503
I1027 19:13:00.521975   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20555ee2-e2da-4c3f-b9a3-3151b8306047] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:13:00 GMT]] Body:0xc0016bad00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000310280 TLS:<nil>}
I1027 19:13:00.522084   69030 retry.go:31] will retry after 19.290461126s: Temporary Error: unexpected response code: 503
I1027 19:13:19.818642   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b38d654b-331c-4897-9586-5fc9fa2df6a6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:13:19 GMT]] Body:0xc000c21940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003103c0 TLS:<nil>}
I1027 19:13:19.818710   69030 retry.go:31] will retry after 21.951798491s: Temporary Error: unexpected response code: 503
I1027 19:13:41.773896   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da166a3a-6efa-463c-8b0d-a42e41cc8052] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:13:41 GMT]] Body:0xc0016badc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029fb80 TLS:<nil>}
I1027 19:13:41.774072   69030 retry.go:31] will retry after 39.048733415s: Temporary Error: unexpected response code: 503
I1027 19:14:20.829395   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[80c79dab-dd9c-442a-b0ee-e56e08f40f8d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:14:20 GMT]] Body:0xc0016ba040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029e000 TLS:<nil>}
I1027 19:14:20.829498   69030 retry.go:31] will retry after 52.820024678s: Temporary Error: unexpected response code: 503
I1027 19:15:13.655424   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac68655f-3251-4fad-a015-a56a0e3b4a32] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:15:13 GMT]] Body:0xc000c200c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040c780 TLS:<nil>}
I1027 19:15:13.655541   69030 retry.go:31] will retry after 1m28.145287587s: Temporary Error: unexpected response code: 503
I1027 19:16:41.805695   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0ffbee2-5f0d-4f69-a041-9f7b40ec4ce6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:16:41 GMT]] Body:0xc000c200c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00029e140 TLS:<nil>}
I1027 19:16:41.805803   69030 retry.go:31] will retry after 32.516818795s: Temporary Error: unexpected response code: 503
I1027 19:17:14.329218   69030 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54ae7e62-d557-4d5a-bb5b-df24e738682e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 27 Oct 2025 19:17:14 GMT]] Body:0xc0016ba080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00040c8c0 TLS:<nil>}
I1027 19:17:14.329315   69030 retry.go:31] will retry after 1m18.649680533s: Temporary Error: unexpected response code: 503
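Note: as the stderr above shows, the runner launches `kubectl proxy --port 36195` and then repeatedly GETs the dashboard service-proxy URL, backing off after each 503 until the overall deadline expires; the dashboard pod never becomes ready in this run, so no URL is printed and the test fails. A minimal Go sketch of that poll-until-healthy pattern (an illustration only, not minikube's actual dashboard.go/retry.go code; the port comes from this run, and the timeout and delay cap are assumed values):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// dashboardURL mirrors the service-proxy URL polled in the log above; the port
// is whatever `kubectl proxy --port ...` was started with (36195 in this run).
const dashboardURL = "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"

// waitForDashboard polls the URL until it returns 200 or the deadline passes,
// doubling the delay after each failure up to a cap.
func waitForDashboard(url string, timeout, maxDelay time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Millisecond
	client := &http.Client{Timeout: 5 * time.Second}

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("got %s, retrying in %s\n", resp.Status, delay)
		} else {
			fmt.Printf("request failed (%v), retrying in %s\n", err, delay)
		}
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return fmt.Errorf("dashboard not ready within %s", timeout)
}

func main() {
	if err := waitForDashboard(dashboardURL, 5*time.Minute, 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```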
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-074768 -n functional-074768
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 logs -n 25: (1.552904297s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-074768 image ls                                                                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ image   │ functional-074768 image save kicbase/echo-server:functional-074768 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ image   │ functional-074768 image rm kicbase/echo-server:functional-074768 --alsologtostderr                                                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ image   │ functional-074768 image ls                                                                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ image   │ functional-074768 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ image   │ functional-074768 image ls                                                                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ image   │ functional-074768 image save --daemon kicbase/echo-server:functional-074768 --alsologtostderr                                                                │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh stat /mount-9p/created-by-test                                                                                                         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh stat /mount-9p/created-by-pod                                                                                                          │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh sudo umount -f /mount-9p                                                                                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdspecific-port3318931296/001:/mount-9p --alsologtostderr -v=1 --port 46464                            │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ ssh     │ functional-074768 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ ssh     │ functional-074768 ssh findmnt -T /mount-9p | grep 9p                                                                                                         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh -- ls -la /mount-9p                                                                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh sudo umount -f /mount-9p                                                                                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ ssh     │ functional-074768 ssh findmnt -T /mount1                                                                                                                     │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount1 --alsologtostderr -v=1                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount3 --alsologtostderr -v=1                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount2 --alsologtostderr -v=1                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ ssh     │ functional-074768 ssh findmnt -T /mount1                                                                                                                     │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh findmnt -T /mount2                                                                                                                     │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh findmnt -T /mount3                                                                                                                     │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ mount   │ -p functional-074768 --kill=true                                                                                                                             │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ addons  │ functional-074768 addons list                                                                                                                                │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ addons  │ functional-074768 addons list -o json                                                                                                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:12:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:12:15.978537   69010 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:12:15.978805   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.978815   69010 out.go:374] Setting ErrFile to fd 2...
	I1027 19:12:15.978819   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.979054   69010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:12:15.979492   69010 out.go:368] Setting JSON to false
	I1027 19:12:15.980424   69010 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6886,"bootTime":1761585450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:12:15.980516   69010 start.go:141] virtualization: kvm guest
	I1027 19:12:15.985372   69010 out.go:179] * [functional-074768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:12:15.986974   69010 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:12:15.986983   69010 notify.go:220] Checking for updates...
	I1027 19:12:15.988280   69010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:12:15.989521   69010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 19:12:15.990743   69010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 19:12:15.991901   69010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:12:15.993094   69010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:12:15.994971   69010 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:12:15.995574   69010 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:12:16.035343   69010 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 19:12:16.036772   69010 start.go:305] selected driver: kvm2
	I1027 19:12:16.036792   69010 start.go:925] validating driver "kvm2" against &{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.036933   69010 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:12:16.038400   69010 cni.go:84] Creating CNI manager for ""
	I1027 19:12:16.038475   69010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 19:12:16.038550   69010 start.go:349] cluster config:
	{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.040073   69010 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.842576123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592636842472815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39bc7c85-7196-4bd0-b1e8-9233e68a108d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.844092299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1105bf67-2868-4f9b-bf30-37fb11750108 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.844312669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1105bf67-2868-4f9b-bf30-37fb11750108 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.844841245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1105bf67-2868-4f9b-bf30-37fb11750108 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.874953748Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3f68d9c6-5b18-4d5f-8bec-05b2808bb6f3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.876177289Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d56420b7f715574c31b43316b51b04c2b94e1cfb7246c8a1104944b40b115c32,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-zrxgm,Uid:3384566f-1f7b-49e8-b729-a97f0e0924c2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761592375906435478,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-zrxgm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3384566f-1f7b-49e8-b729-a97f0e0924c2,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:12:55.587831795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:613b0c218ea429b9f1818fed145baebdd97c524cc2145c08c1ea106343ca0941,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:33487490-a188-40e0-957c-3ebacba05ea4,Namespace:default,Attempt:0,},State:SANDBOX_READY,Cre
atedAt:1761592348074423170,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33487490-a188-40e0-957c-3ebacba05ea4,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-10-27T19:12:27.748292708Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01dbe72b24dcc54f8aea55e2f60d490f65db5fe006c57c83fbb713ed18c594d5,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-vfwcs,Uid:925f4e38-9e4f-48d5-8c
9c-0074a4032738,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761592338258746316,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-vfwcs,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 925f4e38-9e4f-48d5-8c9c-0074a4032738,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:12:17.908761469Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:83842179dc31f4523cc0411bddfa6b37965a6905316e204c7b8ef56b1724b901,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-7xqm9,Uid:672ab189-2efb-4820-827e-d59baf07200c,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761592338141944294,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashbo
ard-855c9754f9-7xqm9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 672ab189-2efb-4820-827e-d59baf07200c,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:12:17.820803917Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:c84388b6-2d7c-40a2-b560-fd225b55349a,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761592336588533506,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:12:16.244405966Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:adf7d8424b128ad074599710dce54cb6fd82994103c9dc81cc985c
84956ba843,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-ppg9q,Uid:66fec0d9-6763-4ac7-be30-631c20dcc46e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761592336507950496,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-ppg9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66fec0d9-6763-4ac7-be30-631c20dcc46e,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:12:16.185388855Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-074768,Uid:c94e58bf276ae150ba3be616b5d9315d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761592306786245440,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-074768
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.117:8441,kubernetes.io/config.hash: c94e58bf276ae150ba3be616b5d9315d,kubernetes.io/config.seen: 2025-10-27T19:11:46.087739289Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2lv8d,Uid:5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761592303771543515,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:11:14.909380586Z,kubern
etes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-074768,Uid:84d9d1d3cdf44b76588eee6ed2c2ed23,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761592303516321516,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 84d9d1d3cdf44b76588eee6ed2c2ed23,kubernetes.io/config.seen: 2025-10-27T19:11:10.913077080Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&PodSandboxMetadata{Name:etcd-functional-074768,Uid:1419f6f2dbf7cdc64e36c7697d572358,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:17615923
03416726160,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.117:2379,kubernetes.io/config.hash: 1419f6f2dbf7cdc64e36c7697d572358,kubernetes.io/config.seen: 2025-10-27T19:11:10.913078349Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-074768,Uid:d75616c8b2e6db9ba925e56dac14f36d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761592303359525554,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d75616c8b2e6db9ba925e56dac14f36d,kubernetes.io/config.seen: 2025-10-27T19:11:10.913075790Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8ca78600-3d29-4edc-9a1c-572cf646e83e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761592303344203137,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcil
e\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-27T19:11:14.909379059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&PodSandboxMetadata{Name:kube-proxy-lp2k8,Uid:538c55aa-9e90-4bb3-83b7-f84cce86edca,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761592303330578361,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-p
roxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:11:14.909362891Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2lv8d,Uid:5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761592265769988825,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T19:10:11.270978707Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f27f5919b6af890342c38a8769648e9c4a65ce64
cb55dea1cf0342776751bca4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8ca78600-3d29-4edc-9a1c-572cf646e83e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761592265417417883,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMoun
ts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-27T19:10:13.133926543Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-074768,Uid:84d9d1d3cdf44b76588eee6ed2c2ed23,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761592265238831082,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 84d9d1d3cdf44b76588eee6ed2c2ed23,kubernetes.io/config.seen: 2025-10-27T19:10:05.776640521Z,kubernetes.io/config.
source: file,},RuntimeHandler:,},&PodSandbox{Id:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&PodSandboxMetadata{Name:etcd-functional-074768,Uid:1419f6f2dbf7cdc64e36c7697d572358,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761592265222590190,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.117:2379,kubernetes.io/config.hash: 1419f6f2dbf7cdc64e36c7697d572358,kubernetes.io/config.seen: 2025-10-27T19:10:05.776622329Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-074768,Uid:d75616c8b2e6db9ba925e56dac14f36d,Namespace:kube-s
ystem,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761592265200320071,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d75616c8b2e6db9ba925e56dac14f36d,kubernetes.io/config.seen: 2025-10-27T19:10:05.776639300Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3f68d9c6-5b18-4d5f-8bec-05b2808bb6f3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.877223461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1824375-b39d-4ffb-b47c-809339afd379 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.877312635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1824375-b39d-4ffb-b47c-809339afd379 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.877607701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1824375-b39d-4ffb-b47c-809339afd379 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.883787847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=279f50d5-7f52-4672-a0e2-e8fed0f73b49 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.883906952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=279f50d5-7f52-4672-a0e2-e8fed0f73b49 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.885281546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e07727cd-94f3-4925-9e9a-5857f7d450af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.885960284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592636885937392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e07727cd-94f3-4925-9e9a-5857f7d450af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.886619786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abd3a165-7004-4a3e-84a4-82fe5e8e50d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.886724329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abd3a165-7004-4a3e-84a4-82fe5e8e50d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.886979991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abd3a165-7004-4a3e-84a4-82fe5e8e50d3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.900260818Z" level=debug msg="GET https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" file="docker/docker_client.go:631"
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.911076653Z" level=debug msg="Too many requests to https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: sleeping for 8.000000 seconds before next attempt" file="docker/docker_client.go:596"
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.937985740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3664067b-60cd-4f4d-b6a8-6a58cb4b2eb9 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.938393367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3664067b-60cd-4f4d-b6a8-6a58cb4b2eb9 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.940096783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b3f7aa6-88dd-4507-8a01-5d22e655716b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.941039662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592636941016874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b3f7aa6-88dd-4507-8a01-5d22e655716b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.941628513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6391757-179e-4662-a1e3-4cafa5771539 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.941737704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6391757-179e-4662-a1e3-4cafa5771539 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:17:16 functional-074768 crio[5469]: time="2025-10-27 19:17:16.941996029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6391757-179e-4662-a1e3-4cafa5771539 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1324210b99e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   542a950a6c940       busybox-mount
	5e03d8878eeba       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      5 minutes ago       Running             kube-proxy                3                   4f2d4092ef8ce       kube-proxy-lp2k8
	d7b0eb17be9e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       3                   f6ffd9e1d08da       storage-provisioner
	610b55ad8d57b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      5 minutes ago       Running             kube-apiserver            0                   ea9b6a12a7c9a       kube-apiserver-functional-074768
	1ba435049ad20       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      5 minutes ago       Running             kube-controller-manager   3                   3da3325cd282f       kube-controller-manager-functional-074768
	3ad24ef975b26       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      5 minutes ago       Running             kube-scheduler            3                   62e2964894758       kube-scheduler-functional-074768
	c257ffa5d58fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      5 minutes ago       Running             etcd                      3                   44c1dc33c037b       etcd-functional-074768
	8b855533e3a4a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      5 minutes ago       Running             coredns                   2                   c0d55bbc8d1fa       coredns-66bc5c9577-2lv8d
	a05ec0c93cfd7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      5 minutes ago       Exited              kube-proxy                2                   4f2d4092ef8ce       kube-proxy-lp2k8
	ba777ad788f45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       2                   f27f5919b6af8       storage-provisioner
	8f6a57ec258cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Exited              etcd                      2                   c1e8123b0e108       etcd-functional-074768
	60dc886cc82b0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Exited              kube-controller-manager   2                   325376ebe73d1       kube-controller-manager-functional-074768
	fa2d3b5fc751c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Exited              kube-scheduler            2                   af345d07331bc       kube-scheduler-functional-074768
	b88ac5c8b6376       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Exited              coredns                   1                   a4731d6d18ca9       coredns-66bc5c9577-2lv8d
	
	
	==> coredns [8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45293 - 5246 "HINFO IN 454679650042713632.2272985414247109723. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039332227s
	
	
	==> coredns [b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53381 - 18232 "HINFO IN 6855518255260926845.7182282404724631670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025903505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-074768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-074768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=functional-074768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_10_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:10:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:17:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    functional-074768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 d60c87697e45438394c451d2f7a36472
	  System UUID:                d60c8769-7e45-4383-94c4-51d2f7a36472
	  Boot ID:                    59f8f872-6752-425e-9853-f7970fb836c8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-ppg9q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  default                     mysql-5bb876957f-zrxgm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    4m22s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-2lv8d                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m6s
	  kube-system                 etcd-functional-074768                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m12s
	  kube-system                 kube-apiserver-functional-074768              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-functional-074768     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-lp2k8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-functional-074768              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vfwcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7xqm9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  NodeHasSufficientMemory  7m18s (x8 over 7m18s)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s (x8 over 7m18s)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s (x7 over 7m18s)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m11s                  kubelet          Node functional-074768 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    7m11s                  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s                  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m11s                  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m8s                   node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 6m7s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)    kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)    kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)    kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m59s                  node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 5m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	
	
	==> dmesg <==
	[Oct27 19:09] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009365] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.178192] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089795] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107366] kauditd_printk_skb: 130 callbacks suppressed
	[Oct27 19:10] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.009137] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.877352] kauditd_printk_skb: 249 callbacks suppressed
	[ +30.932062] kauditd_printk_skb: 38 callbacks suppressed
	[Oct27 19:11] kauditd_printk_skb: 350 callbacks suppressed
	[  +4.462417] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.561811] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.109963] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.392463] kauditd_printk_skb: 303 callbacks suppressed
	[  +1.946868] kauditd_printk_skb: 108 callbacks suppressed
	[Oct27 19:12] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.008563] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000147] kauditd_printk_skb: 152 callbacks suppressed
	[ +19.538164] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.845026] kauditd_printk_skb: 31 callbacks suppressed
	[Oct27 19:13] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0] <==
	{"level":"warn","ts":"2025-10-27T19:11:13.796751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.803213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.820989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.827139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.850127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.864610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.965616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51752","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:11:34.886131Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:11:34.886250Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"error","ts":"2025-10-27T19:11:34.886308Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969623Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.969765Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2025-10-27T19:11:34.969860Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-27T19:11:34.969869Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:11:34.969972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970117Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970124Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974242Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"error","ts":"2025-10-27T19:11:34.974325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974349Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-10-27T19:11:34.974355Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> etcd [c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622] <==
	{"level":"warn","ts":"2025-10-27T19:11:48.670124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.696928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.713759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.731621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.774630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.777064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.803070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.831051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.844794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.869011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.906600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.925550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.942908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.960546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.988468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.013937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.028896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.043608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.055856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.067861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.085770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.088921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.104882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.204360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:17:17 up 7 min,  0 users,  load average: 1.23, 0.70, 0.34
	Linux functional-074768 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e] <==
	I1027 19:11:49.997070       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:11:49.997074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:11:49.997079       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:11:49.997909       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:11:50.006465       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:11:50.020163       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:11:50.020385       1 policy_source.go:240] refreshing policies
	I1027 19:11:50.029748       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:11:50.038132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:11:50.043717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:11:50.135960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:11:50.786831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:11:51.391278       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:11:51.442935       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:11:51.470900       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:11:51.478456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:11:53.405340       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:11:53.556350       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:11:56.001360       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:12:10.771036       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.125.173"}
	I1027 19:12:16.293068       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.218.230"}
	I1027 19:12:17.549055       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:12:18.028917       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.109.120"}
	I1027 19:12:18.065792       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.113.167"}
	I1027 19:12:55.519802       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.18.82"}
	
	
	==> kube-controller-manager [1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf] <==
	I1027 19:11:53.315333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:11:53.315385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:11:53.315342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:11:53.316605       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:11:53.318987       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:11:53.322538       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:11:53.326745       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:11:53.327969       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:53.330177       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:53.338701       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:53.344965       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:53.347598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:53.352566       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:53.352728       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:53.352755       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:11:53.361178       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:11:53.366609       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E1027 19:12:17.675984       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.699011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.727301       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.729874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.738567       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.749280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.760742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.769967       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78] <==
	I1027 19:11:18.053781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:11:18.053872       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:11:18.053945       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:18.054071       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:11:18.054160       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:11:18.054285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:11:18.055457       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:18.055619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:18.056424       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:11:18.056636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.056725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:11:18.056740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:11:18.065223       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:11:18.065380       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:11:18.065450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-074768"
	I1027 19:11:18.065513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:11:18.067483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.067852       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:11:18.068636       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:18.071317       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:11:18.075240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:18.078873       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:11:18.082257       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:18.095993       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:11:18.103590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe] <==
	I1027 19:11:50.619204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:11:50.719751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:11:50.719779       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.117"]
	E1027 19:11:50.719851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:11:50.759219       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 19:11:50.759283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 19:11:50.759314       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:11:50.769903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:11:50.770217       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:11:50.770431       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:50.775267       1 config.go:200] "Starting service config controller"
	I1027 19:11:50.775302       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:11:50.775316       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:11:50.775319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:11:50.775912       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:11:50.775940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:11:50.781346       1 config.go:309] "Starting node config controller"
	I1027 19:11:50.785856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:11:50.786257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:11:50.876443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:11:50.876173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:11:50.876931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069] <==
	I1027 19:11:44.169183       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:11:44.246720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:11:44.249261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-074768&limit=500&resourceVersion=0\": dial tcp 192.168.39.117:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5] <==
	I1027 19:11:48.514616       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:49.895123       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:49.895488       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:49.895519       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:49.895728       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:49.951216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:49.952748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:49.964352       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964562       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:49.964647       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:50.065435       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195] <==
	I1027 19:11:12.865255       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:14.587934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:14.588023       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:14.588034       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:14.588040       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:14.713266       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:14.713415       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:14.715890       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.715936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.716109       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:14.716175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:14.817022       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.878344       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.880247       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:11:34.880379       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:11:34.880567       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:11:34.882011       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:11:34.882253       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 19:16:26 functional-074768 kubelet[6302]: E1027 19:16:26.658607    6302 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 27 19:16:26 functional-074768 kubelet[6302]: E1027 19:16:26.658704    6302 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 27 19:16:26 functional-074768 kubelet[6302]: E1027 19:16:26.658941    6302 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs_kubernetes-dashboard(925f4e38-9e4f-48d5-8c9c-0074a4032738): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:16:26 functional-074768 kubelet[6302]: E1027 19:16:26.658978    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vfwcs" podUID="925f4e38-9e4f-48d5-8c9c-0074a4032738"
	Oct 27 19:16:36 functional-074768 kubelet[6302]: E1027 19:16:36.352430    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592596351652438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:16:36 functional-074768 kubelet[6302]: E1027 19:16:36.352452    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592596351652438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:16:39 functional-074768 kubelet[6302]: E1027 19:16:39.142396    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vfwcs" podUID="925f4e38-9e4f-48d5-8c9c-0074a4032738"
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.262410    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1419f6f2dbf7cdc64e36c7697d572358/crio-c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Error finding container c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Status 404 returned error can't find the container with id c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.263036    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd75616c8b2e6db9ba925e56dac14f36d/crio-325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Error finding container 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Status 404 returned error can't find the container with id 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.263322    6302 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8ca78600-3d29-4edc-9a1c-572cf646e83e/crio-f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Error finding container f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Status 404 returned error can't find the container with id f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.263755    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5f607e8f-f4a5-475f-8bdb-d9c2889d5ada/crio-a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Error finding container a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Status 404 returned error can't find the container with id a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.265888    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod84d9d1d3cdf44b76588eee6ed2c2ed23/crio-af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Error finding container af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Status 404 returned error can't find the container with id af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.354832    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592606354091131  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:16:46 functional-074768 kubelet[6302]: E1027 19:16:46.354881    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592606354091131  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:16:56 functional-074768 kubelet[6302]: E1027 19:16:56.359108    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592616357852747  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:16:56 functional-074768 kubelet[6302]: E1027 19:16:56.359141    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592616357852747  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:16:56 functional-074768 kubelet[6302]: E1027 19:16:56.759429    6302 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 27 19:16:56 functional-074768 kubelet[6302]: E1027 19:16:56.759475    6302 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 27 19:16:56 functional-074768 kubelet[6302]: E1027 19:16:56.759729    6302 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(33487490-a188-40e0-957c-3ebacba05ea4): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:16:56 functional-074768 kubelet[6302]: E1027 19:16:56.759783    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="33487490-a188-40e0-957c-3ebacba05ea4"
	Oct 27 19:17:06 functional-074768 kubelet[6302]: E1027 19:17:06.361102    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592626360643132  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:17:06 functional-074768 kubelet[6302]: E1027 19:17:06.361152    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592626360643132  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:17:10 functional-074768 kubelet[6302]: E1027 19:17:10.138244    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="33487490-a188-40e0-957c-3ebacba05ea4"
	Oct 27 19:17:16 functional-074768 kubelet[6302]: E1027 19:17:16.363607    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592636362805400  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:17:16 functional-074768 kubelet[6302]: E1027 19:17:16.363641    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592636362805400  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	
	
	==> storage-provisioner [ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179] <==
	I1027 19:11:15.295791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:11:15.304293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:11:15.304711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:11:15.307781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:18.764063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:23.031174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:26.630899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:29.685570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.711342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.720392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.720513       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:11:32.721774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	I1027 19:11:32.722233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da7c4387-b27a-4f7e-ae17-d5eda90d8a7d", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a became leader
	W1027 19:11:32.734342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.743127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.822877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	W1027 19:11:34.746637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:34.755281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284] <==
	W1027 19:16:53.635562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:16:55.638267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:16:55.643372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:16:57.647267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:16:57.652800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:16:59.656324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:16:59.664811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:01.669871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:01.677042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:03.680838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:03.685975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:05.689400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:05.700120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:07.704308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:07.709553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:09.712934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:09.722517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:11.725848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:11.736965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:13.741266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:13.746455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:15.749044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:15.755801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:17.760220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:17.765965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074768 describe pod busybox-mount hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-074768 describe pod busybox-mount hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1 (98.768617ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 19:12:49 +0000
	      Finished:     Mon, 27 Oct 2025 19:12:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zc6qh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zc6qh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m2s   default-scheduler  Successfully assigned default/busybox-mount to functional-074768
	  Normal  Pulling    5m1s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m29s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.582s (32.731s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m29s  kubelet            Created container: mount-munger
	  Normal  Started    4m29s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-ppg9q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7d4k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t7d4k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m2s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppg9q to functional-074768
	  Warning  Failed     4m31s                 kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     112s (x2 over 4m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     112s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    98s (x2 over 4m31s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     98s (x2 over 4m31s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    84s (x3 over 5m2s)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-zrxgm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:55 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxvwx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rxvwx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m23s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-zrxgm to functional-074768
	  Warning  Failed     2m22s                 kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m22s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    2m21s                 kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m21s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m8s (x2 over 4m22s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-gmtcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m51s                  default-scheduler  Successfully assigned default/sp-pod to functional-074768
	  Normal   Pulling    2m44s (x2 over 4m50s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     22s (x2 over 2m58s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     22s (x2 over 2m58s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x2 over 2m57s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x2 over 2m57s)     kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vfwcs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7xqm9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-074768 describe pod busybox-mount hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.34s)
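Editor's note: every non-running pod listed above (hello-node-connect, mysql, sp-pod, dashboard-metrics-scraper, kubernetes-dashboard) fails for the same reason the kubelet reports repeatedly: docker.io's unauthenticated pull rate limit ("toomanyrequests"). A minimal mitigation sketch, assuming the images are already present in the CI host's local image cache or as tarballs (side-loading is only one possible fix; authenticating the node against Docker Hub or pointing at a registry mirror are alternatives this report does not choose between), would be to pre-load them into the profile before the parallel tests run:

	# image names taken from the pod Events above; profile name from this run
	minikube -p functional-074768 image load kicbase/echo-server:latest
	minikube -p functional-074768 image load docker.io/library/mysql:5.7
	minikube -p functional-074768 image load docker.io/library/nginx:latest
	# re-check which pods are still not Running, as the post-mortem does
	kubectl --context functional-074768 get pods -A --field-selector=status.phase!=Running
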

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-074768 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-074768 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ppg9q" [66fec0d9-6763-4ac7-be30-631c20dcc46e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-27 19:22:16.547533852 +0000 UTC m=+1564.472075947
functional_test.go:1645: (dbg) Run:  kubectl --context functional-074768 describe po hello-node-connect-7d85dfc575-ppg9q -n default
functional_test.go:1645: (dbg) kubectl --context functional-074768 describe po hello-node-connect-7d85dfc575-ppg9q -n default:
Name:             hello-node-connect-7d85dfc575-ppg9q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074768/192.168.39.117
Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7d4k (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-t7d4k:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppg9q to functional-074768
Warning  Failed     9m29s                kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m13s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     51s (x4 over 9m29s)  kubelet            Error: ErrImagePull
Warning  Failed     51s (x3 over 6m50s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2s (x9 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     2s (x9 over 9m29s)   kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-074768 logs hello-node-connect-7d85dfc575-ppg9q -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-074768 logs hello-node-connect-7d85dfc575-ppg9q -n default: exit status 1 (86.747378ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ppg9q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-074768 logs hello-node-connect-7d85dfc575-ppg9q -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-074768 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ppg9q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074768/192.168.39.117
Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7d4k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-t7d4k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppg9q to functional-074768
Warning  Failed     9m29s                kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m13s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     51s (x4 over 9m29s)  kubelet            Error: ErrImagePull
Warning  Failed     51s (x3 over 6m50s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2s (x9 over 9m29s)   kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     2s (x9 over 9m29s)   kubelet            Error: ImagePullBackOff

                                                
                                                
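Because the pod spec references a bare kicbase/echo-server (no registry, no tag), the runtime resolves it to docker.io/kicbase/echo-server:latest and hits the same rate limit on every retry, as the events show. If a copy of the image is already cached on the host, one workaround is to side-load it into the profile so the pull never reaches Docker Hub (a sketch; assumes the image exists in the host's local image store):

	# Push the locally cached image into the functional-074768 node's container storage
	out/minikube-linux-amd64 -p functional-074768 image load kicbase/echo-server:latest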
functional_test.go:1618: (dbg) Run:  kubectl --context functional-074768 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-074768 logs -l app=hello-node-connect: exit status 1 (64.909987ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ppg9q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-074768 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-074768 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.218.230
IPs:                      10.108.218.230
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31575/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
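The empty Endpoints field follows directly from the pod state above: a NodePort Service only gains endpoints once a pod matching its selector reports Ready, so connections to NodePort 31575 had no backend for the entire 10m0s wait. A quick cross-check against the same context (sketch):

	kubectl --context functional-074768 get endpoints hello-node-connect
	kubectl --context functional-074768 get pods -l app=hello-node-connect -o wide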
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-074768 -n functional-074768
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 logs -n 25: (1.526066481s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-074768 addons list                                                                                              │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ addons         │ functional-074768 addons list -o json                                                                                      │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/62705.pem                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /usr/share/ca-certificates/62705.pem                                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/627052.pem                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /usr/share/ca-certificates/627052.pem                                                       │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/test/nested/copy/62705/hosts                                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp             │ functional-074768 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh -n functional-074768 sudo cat /home/docker/cp-test.txt                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp             │ functional-074768 cp functional-074768:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3501193272/001/cp-test.txt │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh -n functional-074768 sudo cat /home/docker/cp-test.txt                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp             │ functional-074768 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh -n functional-074768 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ update-context │ functional-074768 update-context --alsologtostderr -v=2                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ update-context │ functional-074768 update-context --alsologtostderr -v=2                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ update-context │ functional-074768 update-context --alsologtostderr -v=2                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format short --alsologtostderr                                                                │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format yaml --alsologtostderr                                                                 │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ ssh            │ functional-074768 ssh pgrep buildkitd                                                                                      │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │                     │
	│ image          │ functional-074768 image build -t localhost/my-image:functional-074768 testdata/build --alsologtostderr                     │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls                                                                                                 │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format json --alsologtostderr                                                                 │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format table --alsologtostderr                                                                │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:12:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:12:15.978537   69010 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:12:15.978805   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.978815   69010 out.go:374] Setting ErrFile to fd 2...
	I1027 19:12:15.978819   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.979054   69010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:12:15.979492   69010 out.go:368] Setting JSON to false
	I1027 19:12:15.980424   69010 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6886,"bootTime":1761585450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:12:15.980516   69010 start.go:141] virtualization: kvm guest
	I1027 19:12:15.985372   69010 out.go:179] * [functional-074768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:12:15.986974   69010 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:12:15.986983   69010 notify.go:220] Checking for updates...
	I1027 19:12:15.988280   69010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:12:15.989521   69010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 19:12:15.990743   69010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 19:12:15.991901   69010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:12:15.993094   69010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:12:15.994971   69010 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:12:15.995574   69010 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:12:16.035343   69010 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 19:12:16.036772   69010 start.go:305] selected driver: kvm2
	I1027 19:12:16.036792   69010 start.go:925] validating driver "kvm2" against &{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.036933   69010 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:12:16.038400   69010 cni.go:84] Creating CNI manager for ""
	I1027 19:12:16.038475   69010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 19:12:16.038550   69010 start.go:349] cluster config:
	{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.040073   69010 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.582236329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592937582211627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15e88081-9511-4f66-a311-6da471fa37df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.583304684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5832bec8-c5d3-47e0-9bd9-585159743ce8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.583358558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5832bec8-c5d3-47e0-9bd9-585159743ce8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.583736682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5832bec8-c5d3-47e0-9bd9-585159743ce8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.631432345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1a62f32-53c4-41a0-8cab-e7aca93ba5ad name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.631737704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1a62f32-53c4-41a0-8cab-e7aca93ba5ad name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.634016437Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d707bb1-c80c-499f-ad7e-a63e41c84508 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.634821281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592937634795672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d707bb1-c80c-499f-ad7e-a63e41c84508 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.635519267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b3dd77d-cd23-4fe9-9d5b-333f6e4f2426 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.635593887Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b3dd77d-cd23-4fe9-9d5b-333f6e4f2426 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.635965324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b3dd77d-cd23-4fe9-9d5b-333f6e4f2426 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.680347213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b0f658f-87ca-410e-86c6-f5c04dc95c5a name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.680478777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b0f658f-87ca-410e-86c6-f5c04dc95c5a name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.681884630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6251017-efd3-44f6-a5b6-3146cf5a6f11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.683461787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592937683427105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6251017-efd3-44f6-a5b6-3146cf5a6f11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.685138203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13c93b78-ec48-4410-9a4a-e662a54e6fa6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.685262564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13c93b78-ec48-4410-9a4a-e662a54e6fa6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.685784217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13c93b78-ec48-4410-9a4a-e662a54e6fa6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.736103342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cd42896-d5c7-43dc-a304-ed9b2b5d8365 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.736195602Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cd42896-d5c7-43dc-a304-ed9b2b5d8365 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.737638223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25432018-4bb2-43c1-8e47-c7f74b2919f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.738984338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592937738958871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25432018-4bb2-43c1-8e47-c7f74b2919f1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.739522174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5681cdea-0bbe-4e23-b4cb-6738bb32faf7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.739652480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5681cdea-0bbe-4e23-b4cb-6738bb32faf7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:17 functional-074768 crio[5469]: time="2025-10-27 19:22:17.739977058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5681cdea-0bbe-4e23-b4cb-6738bb32faf7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1324210b99e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   542a950a6c940       busybox-mount
	5e03d8878eeba       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                3                   4f2d4092ef8ce       kube-proxy-lp2k8
	d7b0eb17be9e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       3                   f6ffd9e1d08da       storage-provisioner
	610b55ad8d57b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   ea9b6a12a7c9a       kube-apiserver-functional-074768
	1ba435049ad20       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   3                   3da3325cd282f       kube-controller-manager-functional-074768
	3ad24ef975b26       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            3                   62e2964894758       kube-scheduler-functional-074768
	c257ffa5d58fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   44c1dc33c037b       etcd-functional-074768
	8b855533e3a4a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   c0d55bbc8d1fa       coredns-66bc5c9577-2lv8d
	a05ec0c93cfd7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Exited              kube-proxy                2                   4f2d4092ef8ce       kube-proxy-lp2k8
	ba777ad788f45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   f27f5919b6af8       storage-provisioner
	8f6a57ec258cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      2                   c1e8123b0e108       etcd-functional-074768
	60dc886cc82b0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   2                   325376ebe73d1       kube-controller-manager-functional-074768
	fa2d3b5fc751c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            2                   af345d07331bc       kube-scheduler-functional-074768
	b88ac5c8b6376       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a4731d6d18ca9       coredns-66bc5c9577-2lv8d
	
	
	==> coredns [8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45293 - 5246 "HINFO IN 454679650042713632.2272985414247109723. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039332227s
	
	
	==> coredns [b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53381 - 18232 "HINFO IN 6855518255260926845.7182282404724631670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025903505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-074768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-074768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=functional-074768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_10_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:10:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    functional-074768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 d60c87697e45438394c451d2f7a36472
	  System UUID:                d60c8769-7e45-4383-94c4-51d2f7a36472
	  Boot ID:                    59f8f872-6752-425e-9853-f7970fb836c8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kbc8s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-node-connect-7d85dfc575-ppg9q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-zrxgm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    9m23s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-2lv8d                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-074768                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-074768              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-074768     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lp2k8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-074768              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vfwcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7xqm9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node functional-074768 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	
	
	==> dmesg <==
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009365] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.178192] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089795] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107366] kauditd_printk_skb: 130 callbacks suppressed
	[Oct27 19:10] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.009137] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.877352] kauditd_printk_skb: 249 callbacks suppressed
	[ +30.932062] kauditd_printk_skb: 38 callbacks suppressed
	[Oct27 19:11] kauditd_printk_skb: 350 callbacks suppressed
	[  +4.462417] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.561811] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.109963] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.392463] kauditd_printk_skb: 303 callbacks suppressed
	[  +1.946868] kauditd_printk_skb: 108 callbacks suppressed
	[Oct27 19:12] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.008563] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000147] kauditd_printk_skb: 152 callbacks suppressed
	[ +19.538164] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.845026] kauditd_printk_skb: 31 callbacks suppressed
	[Oct27 19:13] kauditd_printk_skb: 38 callbacks suppressed
	[Oct27 19:18] crun[9565]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.727233] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0] <==
	{"level":"warn","ts":"2025-10-27T19:11:13.796751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.803213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.820989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.827139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.850127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.864610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.965616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51752","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:11:34.886131Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:11:34.886250Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"error","ts":"2025-10-27T19:11:34.886308Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969623Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.969765Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2025-10-27T19:11:34.969860Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-27T19:11:34.969869Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:11:34.969972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970117Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970124Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974242Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"error","ts":"2025-10-27T19:11:34.974325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974349Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-10-27T19:11:34.974355Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> etcd [c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622] <==
	{"level":"warn","ts":"2025-10-27T19:11:48.731621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.774630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.777064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.803070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.831051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.844794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.869011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.906600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.925550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.942908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.960546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.988468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.013937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.028896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.043608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.055856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.067861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.085770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.088921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.104882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.204360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:21:47.882964Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2025-10-27T19:21:47.906946Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1106,"took":"23.35299ms","hash":1242694967,"current-db-size-bytes":3502080,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-27T19:21:47.907008Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1242694967,"revision":1106,"compact-revision":-1}
	
	
	==> kernel <==
	 19:22:18 up 12 min,  0 users,  load average: 0.21, 0.40, 0.31
	Linux functional-074768 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e] <==
	I1027 19:11:49.997079       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:11:49.997909       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:11:50.006465       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:11:50.020163       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:11:50.020385       1 policy_source.go:240] refreshing policies
	I1027 19:11:50.029748       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:11:50.038132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:11:50.043717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:11:50.135960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:11:50.786831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:11:51.391278       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:11:51.442935       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:11:51.470900       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:11:51.478456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:11:53.405340       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:11:53.556350       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:11:56.001360       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:12:10.771036       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.125.173"}
	I1027 19:12:16.293068       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.218.230"}
	I1027 19:12:17.549055       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:12:18.028917       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.109.120"}
	I1027 19:12:18.065792       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.113.167"}
	I1027 19:12:55.519802       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.18.82"}
	I1027 19:17:20.567427       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.206.225"}
	I1027 19:21:49.945423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf] <==
	I1027 19:11:53.315333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:11:53.315385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:11:53.315342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:11:53.316605       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:11:53.318987       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:11:53.322538       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:11:53.326745       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:11:53.327969       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:53.330177       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:53.338701       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:53.344965       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:53.347598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:53.352566       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:53.352728       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:53.352755       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:11:53.361178       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:11:53.366609       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E1027 19:12:17.675984       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.699011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.727301       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.729874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.738567       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.749280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.760742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.769967       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78] <==
	I1027 19:11:18.053781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:11:18.053872       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:11:18.053945       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:18.054071       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:11:18.054160       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:11:18.054285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:11:18.055457       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:18.055619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:18.056424       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:11:18.056636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.056725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:11:18.056740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:11:18.065223       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:11:18.065380       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:11:18.065450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-074768"
	I1027 19:11:18.065513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:11:18.067483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.067852       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:11:18.068636       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:18.071317       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:11:18.075240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:18.078873       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:11:18.082257       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:18.095993       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:11:18.103590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe] <==
	I1027 19:11:50.619204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:11:50.719751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:11:50.719779       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.117"]
	E1027 19:11:50.719851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:11:50.759219       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 19:11:50.759283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 19:11:50.759314       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:11:50.769903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:11:50.770217       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:11:50.770431       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:50.775267       1 config.go:200] "Starting service config controller"
	I1027 19:11:50.775302       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:11:50.775316       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:11:50.775319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:11:50.775912       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:11:50.775940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:11:50.781346       1 config.go:309] "Starting node config controller"
	I1027 19:11:50.785856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:11:50.786257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:11:50.876443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:11:50.876173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:11:50.876931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069] <==
	I1027 19:11:44.169183       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:11:44.246720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:11:44.249261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-074768&limit=500&resourceVersion=0\": dial tcp 192.168.39.117:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5] <==
	I1027 19:11:48.514616       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:49.895123       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:49.895488       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:49.895519       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:49.895728       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:49.951216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:49.952748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:49.964352       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964562       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:49.964647       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:50.065435       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195] <==
	I1027 19:11:12.865255       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:14.587934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:14.588023       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:14.588034       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:14.588040       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:14.713266       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:14.713415       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:14.715890       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.715936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.716109       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:14.716175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:14.817022       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.878344       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.880247       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:11:34.880379       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:11:34.880567       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:11:34.882011       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:11:34.882253       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 19:21:34 functional-074768 kubelet[6302]: E1027 19:21:34.141441    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zrxgm" podUID="3384566f-1f7b-49e8-b729-a97f0e0924c2"
	Oct 27 19:21:36 functional-074768 kubelet[6302]: E1027 19:21:36.138633    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:21:36 functional-074768 kubelet[6302]: E1027 19:21:36.437767    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592896437375769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:21:36 functional-074768 kubelet[6302]: E1027 19:21:36.437788    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592896437375769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.262991    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1419f6f2dbf7cdc64e36c7697d572358/crio-c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Error finding container c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Status 404 returned error can't find the container with id c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.264942    6302 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8ca78600-3d29-4edc-9a1c-572cf646e83e/crio-f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Error finding container f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Status 404 returned error can't find the container with id f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.265449    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod84d9d1d3cdf44b76588eee6ed2c2ed23/crio-af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Error finding container af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Status 404 returned error can't find the container with id af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.266380    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5f607e8f-f4a5-475f-8bdb-d9c2889d5ada/crio-a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Error finding container a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Status 404 returned error can't find the container with id a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.267467    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd75616c8b2e6db9ba925e56dac14f36d/crio-325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Error finding container 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Status 404 returned error can't find the container with id 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.439383    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592906439112770  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:21:46 functional-074768 kubelet[6302]: E1027 19:21:46.439426    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592906439112770  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:21:49 functional-074768 kubelet[6302]: E1027 19:21:49.138276    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:21:55 functional-074768 kubelet[6302]: E1027 19:21:55.826639    6302 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 27 19:21:55 functional-074768 kubelet[6302]: E1027 19:21:55.826756    6302 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 27 19:21:55 functional-074768 kubelet[6302]: E1027 19:21:55.827006    6302 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-7xqm9_kubernetes-dashboard(672ab189-2efb-4820-827e-d59baf07200c): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:21:55 functional-074768 kubelet[6302]: E1027 19:21:55.827046    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7xqm9" podUID="672ab189-2efb-4820-827e-d59baf07200c"
	Oct 27 19:21:56 functional-074768 kubelet[6302]: E1027 19:21:56.441320    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592916440728108  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:21:56 functional-074768 kubelet[6302]: E1027 19:21:56.441362    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592916440728108  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:03 functional-074768 kubelet[6302]: E1027 19:22:03.139019    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:22:06 functional-074768 kubelet[6302]: E1027 19:22:06.443796    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592926443251024  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:06 functional-074768 kubelet[6302]: E1027 19:22:06.443844    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592926443251024  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:11 functional-074768 kubelet[6302]: E1027 19:22:11.141171    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7xqm9" podUID="672ab189-2efb-4820-827e-d59baf07200c"
	Oct 27 19:22:14 functional-074768 kubelet[6302]: E1027 19:22:14.138450    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:22:16 functional-074768 kubelet[6302]: E1027 19:22:16.447537    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592936445569675  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:16 functional-074768 kubelet[6302]: E1027 19:22:16.447560    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592936445569675  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179] <==
	I1027 19:11:15.295791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:11:15.304293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:11:15.304711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:11:15.307781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:18.764063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:23.031174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:26.630899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:29.685570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.711342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.720392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.720513       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:11:32.721774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	I1027 19:11:32.722233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da7c4387-b27a-4f7e-ae17-d5eda90d8a7d", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a became leader
	W1027 19:11:32.734342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.743127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.822877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	W1027 19:11:34.746637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:34.755281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284] <==
	W1027 19:21:53.286978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:21:55.291115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:21:55.298124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:21:57.301315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:21:57.306965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:21:59.311066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:21:59.316069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:01.319721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:01.324827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:03.329008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:03.335148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:05.339508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:05.344442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:07.348508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:07.355105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:09.358647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:09.365351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:11.369060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:11.373971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:13.377300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:13.382976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:15.387020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:15.393002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:17.400453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:17.408831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1 (113.298076ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 19:12:49 +0000
	      Finished:     Mon, 27 Oct 2025 19:12:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zc6qh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zc6qh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-074768
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.582s (32.731s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m30s  kubelet            Created container: mount-munger
	  Normal  Started    9m30s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kbc8s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:17:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8l6jd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8l6jd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m59s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kbc8s to functional-074768
	  Warning  Failed     2m38s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m38s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m38s                  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m38s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m23s (x2 over 4m58s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ppg9q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7d4k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t7d4k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppg9q to functional-074768
	  Warning  Failed     9m32s                kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m16s (x4 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     54s (x4 over 9m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     54s (x3 over 6m53s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x9 over 9m32s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     5s (x9 over 9m32s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-zrxgm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:55 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxvwx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rxvwx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m24s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-zrxgm to functional-074768
	  Warning  Failed     4m39s (x2 over 7m23s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     84s (x3 over 7m23s)    kubelet            Error: ErrImagePull
	  Warning  Failed     84s                    kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    45s (x5 over 7m22s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     45s (x5 over 7m22s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x4 over 9m23s)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-gmtcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m52s                  default-scheduler  Successfully assigned default/sp-pod to functional-074768
	  Warning  Failed     5m23s (x2 over 7m59s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     114s (x3 over 7m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     114s                   kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    75s (x5 over 7m58s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     75s (x5 over 7m58s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    64s (x4 over 9m51s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vfwcs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7xqm9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.04s)
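Every image-pull failure recorded above traces back to the same cause: the kubelet pulls docker.io/kicbase/echo-server, docker.io/mysql:5.7 and docker.io/nginx anonymously, and Docker Hub answers with toomanyrequests. A quick way to check whether the CI host has exhausted its anonymous allowance is Docker's documented rate-limit probe; the sketch below is illustrative only and is not part of the minikube test suite — the auth.docker.io / registry-1.docker.io endpoints and the ratelimit-* headers are taken from Docker's public documentation, not from this log.

// ratelimitprobe.go - illustrative sketch, not part of the minikube test suite.
// Queries Docker Hub's rate-limit preview repository anonymously and prints the
// allowance headers; endpoints and header names follow Docker's public docs.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 1. Obtain an anonymous pull token for the ratelimitpreview/test repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; the response headers carry the current allowance.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	head, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer head.Body.Close()

	fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
	fmt.Println("ratelimit-source:   ", head.Header.Get("docker-ratelimit-source"))
}

If the remaining count reads 0, the ErrImagePull/ImagePullBackOff loops above are expected; pre-loading the images into the cluster (for example with minikube image load) or authenticating the pulls would be one way to take these tests off the anonymous quota.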

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (367.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8ca78600-3d29-4edc-9a1c-572cf646e83e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004440451s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-074768 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-074768 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-074768 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-074768 apply -f testdata/storage-provisioner/pod.yaml
I1027 19:12:27.754410   62705 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [33487490-a188-40e0-957c-3ebacba05ea4] Pending
helpers_test.go:352: "sp-pod" [33487490-a188-40e0-957c-3ebacba05ea4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-27 19:18:27.984822433 +0000 UTC m=+1335.909364539
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-074768 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-074768 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074768/192.168.39.117
Start Time:       Mon, 27 Oct 2025 19:12:27 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:  10.244.0.11
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtcg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-gmtcg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  6m1s                default-scheduler  Successfully assigned default/sp-pod to functional-074768
  Warning  Failed     92s (x2 over 4m8s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     92s (x2 over 4m8s)  kubelet            Error: ErrImagePull
  Normal   BackOff    78s (x2 over 4m7s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     78s (x2 over 4m7s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    65s (x3 over 6m)    kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-074768 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-074768 logs sp-pod -n default: exit status 1 (76.842019ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-074768 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
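For context, the 6m0s wait that just failed (functional_test_pvc_test.go:140) is a poll of the default namespace for a Running pod carrying the test=storage-provisioner label. The sketch below is a minimal client-go equivalent, not the actual helper from helpers_test.go; the kubeconfig path, poll interval and error handling are assumptions chosen for illustration.

// pvcwait.go - illustrative sketch, not the helper used by helpers_test.go.
// Polls the default namespace for pods labelled test=storage-provisioner and
// succeeds once one of them reports phase Running, or fails after six minutes.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "test=storage-provisioner",
			})
			if err != nil {
				// Tolerate transient list errors and keep polling.
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod never became Running:", err) // e.g. context deadline exceeded
		os.Exit(1)
	}
	fmt.Println("pod is Running")
}

Because docker.io/nginx never becomes pullable, sp-pod stays Pending for the whole window and a poll like this ends with context deadline exceeded, which is exactly the error reported above.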
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-074768 -n functional-074768
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 logs -n 25: (1.493321688s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-074768 ssh -- ls -la /mount-9p                                                                                  │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh sudo umount -f /mount-9p                                                                             │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ ssh     │ functional-074768 ssh findmnt -T /mount1                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount1 --alsologtostderr -v=1         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount3 --alsologtostderr -v=1         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ mount   │ -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount2 --alsologtostderr -v=1         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ ssh     │ functional-074768 ssh findmnt -T /mount1                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh findmnt -T /mount2                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh findmnt -T /mount3                                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ mount   │ -p functional-074768 --kill=true                                                                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │                     │
	│ addons  │ functional-074768 addons list                                                                                              │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ addons  │ functional-074768 addons list -o json                                                                                      │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /etc/ssl/certs/62705.pem                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /usr/share/ca-certificates/62705.pem                                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /etc/ssl/certs/627052.pem                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /usr/share/ca-certificates/627052.pem                                                       │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh sudo cat /etc/test/nested/copy/62705/hosts                                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp      │ functional-074768 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh -n functional-074768 sudo cat /home/docker/cp-test.txt                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp      │ functional-074768 cp functional-074768:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3501193272/001/cp-test.txt │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh -n functional-074768 sudo cat /home/docker/cp-test.txt                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp      │ functional-074768 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh     │ functional-074768 ssh -n functional-074768 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:12:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:12:15.978537   69010 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:12:15.978805   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.978815   69010 out.go:374] Setting ErrFile to fd 2...
	I1027 19:12:15.978819   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.979054   69010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:12:15.979492   69010 out.go:368] Setting JSON to false
	I1027 19:12:15.980424   69010 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6886,"bootTime":1761585450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:12:15.980516   69010 start.go:141] virtualization: kvm guest
	I1027 19:12:15.985372   69010 out.go:179] * [functional-074768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:12:15.986974   69010 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:12:15.986983   69010 notify.go:220] Checking for updates...
	I1027 19:12:15.988280   69010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:12:15.989521   69010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 19:12:15.990743   69010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 19:12:15.991901   69010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:12:15.993094   69010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:12:15.994971   69010 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:12:15.995574   69010 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:12:16.035343   69010 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 19:12:16.036772   69010 start.go:305] selected driver: kvm2
	I1027 19:12:16.036792   69010 start.go:925] validating driver "kvm2" against &{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.036933   69010 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:12:16.038400   69010 cni.go:84] Creating CNI manager for ""
	I1027 19:12:16.038475   69010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 19:12:16.038550   69010 start.go:349] cluster config:
	{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.040073   69010 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.818295659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592708818272743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5edd69d4-66f6-4bb4-89be-dccad180f642 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.819347278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6cb9f93-95b1-460a-9d42-082bba098c27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.819403333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6cb9f93-95b1-460a-9d42-082bba098c27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.819738127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6cb9f93-95b1-460a-9d42-082bba098c27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.869752579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81d5d1c2-28e9-4917-ab6d-2a5127c7c4be name=/runtime.v1.RuntimeService/Version
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.869853427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81d5d1c2-28e9-4917-ab6d-2a5127c7c4be name=/runtime.v1.RuntimeService/Version
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.871879914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbbf8447-40a5-49d2-9504-1e6c2c541ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.872771549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592708872744196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbbf8447-40a5-49d2-9504-1e6c2c541ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.873488247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2dff68d-38f6-4f6c-a37c-067ce924e284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.873563712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2dff68d-38f6-4f6c-a37c-067ce924e284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.873891934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2dff68d-38f6-4f6c-a37c-067ce924e284 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.912722547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=108bb165-48a4-429e-a51c-fbd38d6c491c name=/runtime.v1.RuntimeService/Version
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.912795025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=108bb165-48a4-429e-a51c-fbd38d6c491c name=/runtime.v1.RuntimeService/Version
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.914359550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65c5cbdf-825e-4219-91c1-0ee7e65d49b3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.915119142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592708915068545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65c5cbdf-825e-4219-91c1-0ee7e65d49b3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.915939445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ac619da-599c-41e2-ab0b-503b1eb67a29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.916006726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ac619da-599c-41e2-ab0b-503b1eb67a29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.916485105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ac619da-599c-41e2-ab0b-503b1eb67a29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.954840046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea382e2c-2e4c-4a9b-be79-5ddb99530722 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.954918470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea382e2c-2e4c-4a9b-be79-5ddb99530722 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.956761991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1d73cbe6-090d-4292-b62e-cc333a219232 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.957363234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592708957337046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1d73cbe6-090d-4292-b62e-cc333a219232 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.958328243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d37c5814-2f20-4576-a9ea-9fd5e7107eb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.958423161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d37c5814-2f20-4576-a9ea-9fd5e7107eb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:18:28 functional-074768 crio[5469]: time="2025-10-27 19:18:28.958761700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d37c5814-2f20-4576-a9ea-9fd5e7107eb5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1324210b99e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   542a950a6c940       busybox-mount
	5e03d8878eeba       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                3                   4f2d4092ef8ce       kube-proxy-lp2k8
	d7b0eb17be9e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       3                   f6ffd9e1d08da       storage-provisioner
	610b55ad8d57b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   ea9b6a12a7c9a       kube-apiserver-functional-074768
	1ba435049ad20       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   3                   3da3325cd282f       kube-controller-manager-functional-074768
	3ad24ef975b26       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            3                   62e2964894758       kube-scheduler-functional-074768
	c257ffa5d58fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      3                   44c1dc33c037b       etcd-functional-074768
	8b855533e3a4a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   2                   c0d55bbc8d1fa       coredns-66bc5c9577-2lv8d
	a05ec0c93cfd7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Exited              kube-proxy                2                   4f2d4092ef8ce       kube-proxy-lp2k8
	ba777ad788f45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       2                   f27f5919b6af8       storage-provisioner
	8f6a57ec258cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      2                   c1e8123b0e108       etcd-functional-074768
	60dc886cc82b0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   2                   325376ebe73d1       kube-controller-manager-functional-074768
	fa2d3b5fc751c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            2                   af345d07331bc       kube-scheduler-functional-074768
	b88ac5c8b6376       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   1                   a4731d6d18ca9       coredns-66bc5c9577-2lv8d
	
	
	==> coredns [8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45293 - 5246 "HINFO IN 454679650042713632.2272985414247109723. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039332227s
	
	
	==> coredns [b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53381 - 18232 "HINFO IN 6855518255260926845.7182282404724631670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025903505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-074768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-074768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=functional-074768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_10_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:10:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:18:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:16:05 +0000   Mon, 27 Oct 2025 19:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    functional-074768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 d60c87697e45438394c451d2f7a36472
	  System UUID:                d60c8769-7e45-4383-94c4-51d2f7a36472
	  Boot ID:                    59f8f872-6752-425e-9853-f7970fb836c8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kbc8s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  default                     hello-node-connect-7d85dfc575-ppg9q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  default                     mysql-5bb876957f-zrxgm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m34s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-2lv8d                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m18s
	  kube-system                 etcd-functional-074768                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m24s
	  kube-system                 kube-apiserver-functional-074768              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-functional-074768     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-lp2k8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-functional-074768              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vfwcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7xqm9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m16s                  kube-proxy       
	  Normal  Starting                 6m38s                  kube-proxy       
	  Normal  Starting                 7m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m30s (x8 over 8m30s)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x8 over 8m30s)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x7 over 8m30s)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m23s                  kubelet          Node functional-074768 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    8m23s                  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s                  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m23s                  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m20s                  node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m18s (x8 over 7m18s)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s (x7 over 7m18s)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m18s (x8 over 7m18s)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m11s                  node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 6m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m43s (x8 over 6m43s)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x8 over 6m43s)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x7 over 6m43s)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m36s                  node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	
	
	==> dmesg <==
	[Oct27 19:09] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009365] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.178192] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089795] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107366] kauditd_printk_skb: 130 callbacks suppressed
	[Oct27 19:10] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.009137] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.877352] kauditd_printk_skb: 249 callbacks suppressed
	[ +30.932062] kauditd_printk_skb: 38 callbacks suppressed
	[Oct27 19:11] kauditd_printk_skb: 350 callbacks suppressed
	[  +4.462417] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.561811] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.109963] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.392463] kauditd_printk_skb: 303 callbacks suppressed
	[  +1.946868] kauditd_printk_skb: 108 callbacks suppressed
	[Oct27 19:12] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.008563] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000147] kauditd_printk_skb: 152 callbacks suppressed
	[ +19.538164] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.845026] kauditd_printk_skb: 31 callbacks suppressed
	[Oct27 19:13] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0] <==
	{"level":"warn","ts":"2025-10-27T19:11:13.796751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.803213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.820989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.827139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.850127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.864610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.965616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51752","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:11:34.886131Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:11:34.886250Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"error","ts":"2025-10-27T19:11:34.886308Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969623Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.969765Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2025-10-27T19:11:34.969860Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-27T19:11:34.969869Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:11:34.969972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970117Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970124Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974242Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"error","ts":"2025-10-27T19:11:34.974325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974349Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-10-27T19:11:34.974355Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> etcd [c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622] <==
	{"level":"warn","ts":"2025-10-27T19:11:48.670124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.696928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.713759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.731621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.774630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.777064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.803070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.831051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.844794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.869011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.906600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.925550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.942908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.960546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.988468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.013937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.028896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.043608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.055856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.067861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.085770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.088921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.104882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.204360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:18:29 up 8 min,  0 users,  load average: 0.76, 0.66, 0.36
	Linux functional-074768 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e] <==
	I1027 19:11:49.997074       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:11:49.997079       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:11:49.997909       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:11:50.006465       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:11:50.020163       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:11:50.020385       1 policy_source.go:240] refreshing policies
	I1027 19:11:50.029748       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:11:50.038132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:11:50.043717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:11:50.135960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:11:50.786831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:11:51.391278       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:11:51.442935       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:11:51.470900       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:11:51.478456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:11:53.405340       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:11:53.556350       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:11:56.001360       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:12:10.771036       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.125.173"}
	I1027 19:12:16.293068       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.218.230"}
	I1027 19:12:17.549055       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:12:18.028917       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.109.120"}
	I1027 19:12:18.065792       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.113.167"}
	I1027 19:12:55.519802       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.18.82"}
	I1027 19:17:20.567427       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.206.225"}
	
	
	==> kube-controller-manager [1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf] <==
	I1027 19:11:53.315333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:11:53.315385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:11:53.315342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:11:53.316605       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:11:53.318987       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:11:53.322538       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:11:53.326745       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:11:53.327969       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:53.330177       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:53.338701       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:53.344965       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:53.347598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:53.352566       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:53.352728       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:53.352755       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:11:53.361178       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:11:53.366609       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E1027 19:12:17.675984       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.699011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.727301       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.729874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.738567       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.749280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.760742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.769967       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78] <==
	I1027 19:11:18.053781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:11:18.053872       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:11:18.053945       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:18.054071       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:11:18.054160       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:11:18.054285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:11:18.055457       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:18.055619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:18.056424       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:11:18.056636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.056725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:11:18.056740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:11:18.065223       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:11:18.065380       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:11:18.065450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-074768"
	I1027 19:11:18.065513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:11:18.067483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.067852       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:11:18.068636       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:18.071317       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:11:18.075240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:18.078873       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:11:18.082257       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:18.095993       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:11:18.103590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe] <==
	I1027 19:11:50.619204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:11:50.719751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:11:50.719779       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.117"]
	E1027 19:11:50.719851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:11:50.759219       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 19:11:50.759283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 19:11:50.759314       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:11:50.769903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:11:50.770217       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:11:50.770431       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:50.775267       1 config.go:200] "Starting service config controller"
	I1027 19:11:50.775302       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:11:50.775316       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:11:50.775319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:11:50.775912       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:11:50.775940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:11:50.781346       1 config.go:309] "Starting node config controller"
	I1027 19:11:50.785856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:11:50.786257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:11:50.876443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:11:50.876173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:11:50.876931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069] <==
	I1027 19:11:44.169183       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:11:44.246720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:11:44.249261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-074768&limit=500&resourceVersion=0\": dial tcp 192.168.39.117:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5] <==
	I1027 19:11:48.514616       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:49.895123       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:49.895488       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:49.895519       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:49.895728       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:49.951216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:49.952748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:49.964352       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964562       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:49.964647       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:50.065435       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195] <==
	I1027 19:11:12.865255       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:14.587934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:14.588023       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:14.588034       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:14.588040       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:14.713266       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:14.713415       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:14.715890       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.715936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.716109       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:14.716175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:14.817022       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.878344       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.880247       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:11:34.880379       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:11:34.880567       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:11:34.882011       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:11:34.882253       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 19:17:40 functional-074768 kubelet[6302]: E1027 19:17:40.935337    6302 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 27 19:17:40 functional-074768 kubelet[6302]: E1027 19:17:40.935463    6302 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 27 19:17:40 functional-074768 kubelet[6302]: E1027 19:17:40.935739    6302 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-zrxgm_default(3384566f-1f7b-49e8-b729-a97f0e0924c2): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:17:40 functional-074768 kubelet[6302]: E1027 19:17:40.935774    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zrxgm" podUID="3384566f-1f7b-49e8-b729-a97f0e0924c2"
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.263600    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd75616c8b2e6db9ba925e56dac14f36d/crio-325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Error finding container 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Status 404 returned error can't find the container with id 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.264306    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod84d9d1d3cdf44b76588eee6ed2c2ed23/crio-af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Error finding container af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Status 404 returned error can't find the container with id af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.264694    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1419f6f2dbf7cdc64e36c7697d572358/crio-c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Error finding container c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Status 404 returned error can't find the container with id c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.265029    6302 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8ca78600-3d29-4edc-9a1c-572cf646e83e/crio-f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Error finding container f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Status 404 returned error can't find the container with id f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.265428    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5f607e8f-f4a5-475f-8bdb-d9c2889d5ada/crio-a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Error finding container a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Status 404 returned error can't find the container with id a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.371201    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592666370562423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:17:46 functional-074768 kubelet[6302]: E1027 19:17:46.371268    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592666370562423  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:17:55 functional-074768 kubelet[6302]: E1027 19:17:55.139510    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-zrxgm" podUID="3384566f-1f7b-49e8-b729-a97f0e0924c2"
	Oct 27 19:17:56 functional-074768 kubelet[6302]: E1027 19:17:56.374959    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592676372861707  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:17:56 functional-074768 kubelet[6302]: E1027 19:17:56.375004    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592676372861707  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:18:06 functional-074768 kubelet[6302]: E1027 19:18:06.377972    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592686376308010  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:18:06 functional-074768 kubelet[6302]: E1027 19:18:06.377998    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592686376308010  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:18:11 functional-074768 kubelet[6302]: E1027 19:18:11.037313    6302 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 27 19:18:11 functional-074768 kubelet[6302]: E1027 19:18:11.037374    6302 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 27 19:18:11 functional-074768 kubelet[6302]: E1027 19:18:11.037623    6302 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-ppg9q_default(66fec0d9-6763-4ac7-be30-631c20dcc46e): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:18:11 functional-074768 kubelet[6302]: E1027 19:18:11.037711    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:18:16 functional-074768 kubelet[6302]: E1027 19:18:16.380927    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592696380490634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:18:16 functional-074768 kubelet[6302]: E1027 19:18:16.380948    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592696380490634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:18:22 functional-074768 kubelet[6302]: E1027 19:18:22.138221    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:18:26 functional-074768 kubelet[6302]: E1027 19:18:26.384877    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592706383486380  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Oct 27 19:18:26 functional-074768 kubelet[6302]: E1027 19:18:26.385133    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592706383486380  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	
	
	==> storage-provisioner [ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179] <==
	I1027 19:11:15.295791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:11:15.304293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:11:15.304711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:11:15.307781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:18.764063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:23.031174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:26.630899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:29.685570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.711342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.720392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.720513       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:11:32.721774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	I1027 19:11:32.722233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da7c4387-b27a-4f7e-ae17-d5eda90d8a7d", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a became leader
	W1027 19:11:32.734342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.743127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.822877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	W1027 19:11:34.746637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:34.755281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284] <==
	W1027 19:18:04.008519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:06.012030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:06.018142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:08.021522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:08.026953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:10.030037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:10.040282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:12.043700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:12.053640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:14.057790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:14.063765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:16.068031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:16.073976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:18.077783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:18.087134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:20.092054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:20.097263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:22.102045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:22.107604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:24.112866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:24.121500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:26.125462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:26.132433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:28.136061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:18:28.145187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1 (113.015302ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 19:12:49 +0000
	      Finished:     Mon, 27 Oct 2025 19:12:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zc6qh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zc6qh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m14s  default-scheduler  Successfully assigned default/busybox-mount to functional-074768
	  Normal  Pulling    6m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m41s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.582s (32.731s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m41s  kubelet            Created container: mount-munger
	  Normal  Started    5m41s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kbc8s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:17:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8l6jd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8l6jd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  70s   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kbc8s to functional-074768
	  Normal  Pulling    69s   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ppg9q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7d4k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t7d4k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m14s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppg9q to functional-074768
	  Warning  Failed     5m43s                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m36s (x3 over 6m14s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     19s (x3 over 5m43s)    kubelet            Error: ErrImagePull
	  Warning  Failed     19s (x2 over 3m4s)     kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x3 over 5m43s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     8s (x3 over 5m43s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-zrxgm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:55 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxvwx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rxvwx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m35s                default-scheduler  Successfully assigned default/mysql-5bb876957f-zrxgm to functional-074768
	  Warning  Failed     50s (x2 over 3m34s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     50s (x2 over 3m34s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    35s (x2 over 3m33s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     35s (x2 over 3m33s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    21s (x3 over 5m34s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-gmtcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-074768
	  Warning  Failed     94s (x2 over 4m10s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     94s (x2 over 4m10s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    80s (x2 over 4m9s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     80s (x2 over 4m9s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    67s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vfwcs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7xqm9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.90s)
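Note: the pod events captured above all fail on the same condition: unauthenticated pulls from docker.io are rejected with toomanyrequests (Docker Hub rate limiting), so the containers never leave ErrImagePull/ImagePullBackOff and the test times out waiting for the PVC-backed pod. A minimal local reproduction/workaround sketch, assuming the image is already present on the host and the same profile name functional-074768 is used:

	# Show the kubelet events for the stuck pod; the rate-limit error appears in the Failed events.
	kubectl --context functional-074768 describe pod sp-pod -n default
	# Load a locally available image into the minikube node so the kubelet does not pull from docker.io.
	minikube -p functional-074768 image load docker.io/nginx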

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-074768 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-zrxgm" [3384566f-1f7b-49e8-b729-a97f0e0924c2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1027 19:13:46.121289   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:14:13.834823   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-27 19:22:55.799891322 +0000 UTC m=+1603.724433422
functional_test.go:1804: (dbg) Run:  kubectl --context functional-074768 describe po mysql-5bb876957f-zrxgm -n default
functional_test.go:1804: (dbg) kubectl --context functional-074768 describe po mysql-5bb876957f-zrxgm -n default:
Name:             mysql-5bb876957f-zrxgm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074768/192.168.39.117
Start Time:       Mon, 27 Oct 2025 19:12:55 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxvwx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rxvwx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-zrxgm to functional-074768
  Warning  Failed     5m15s (x2 over 7m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m (x3 over 7m59s)     kubelet            Error: ErrImagePull
  Warning  Failed     2m                     kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    81s (x5 over 7m58s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     81s (x5 over 7m58s)    kubelet            Error: ImagePullBackOff
  Normal   Pulling    70s (x4 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-074768 logs mysql-5bb876957f-zrxgm -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-074768 logs mysql-5bb876957f-zrxgm -n default: exit status 1 (63.619854ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-zrxgm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-074768 logs mysql-5bb876957f-zrxgm -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
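The wait at functional_test.go:1804 is a label-selector poll: the test re-applies testdata/mysql.yaml and then waits up to 10m0s for a Ready pod labelled app=mysql in the default namespace. A rough manual equivalent, sketched with plain kubectl rather than the test helper itself and assuming the same context name:

	# Apply the same manifest and wait for readiness with the same label selector and timeout.
	kubectl --context functional-074768 replace --force -f testdata/mysql.yaml
	kubectl --context functional-074768 wait --for=condition=Ready pod -l app=mysql -n default --timeout=600s
	# On timeout, the pod events name the cause (here: docker.io rate limiting).
	kubectl --context functional-074768 get events -n default --field-selector involvedObject.name=mysql-5bb876957f-zrxgm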
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-074768 -n functional-074768
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 logs -n 25: (1.500696513s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons         │ functional-074768 addons list                                                                                              │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ addons         │ functional-074768 addons list -o json                                                                                      │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:12 UTC │ 27 Oct 25 19:12 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/62705.pem                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /usr/share/ca-certificates/62705.pem                                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/627052.pem                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /usr/share/ca-certificates/627052.pem                                                       │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh sudo cat /etc/test/nested/copy/62705/hosts                                                           │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp             │ functional-074768 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh -n functional-074768 sudo cat /home/docker/cp-test.txt                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp             │ functional-074768 cp functional-074768:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3501193272/001/cp-test.txt │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh -n functional-074768 sudo cat /home/docker/cp-test.txt                                               │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ cp             │ functional-074768 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-074768 ssh -n functional-074768 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ update-context │ functional-074768 update-context --alsologtostderr -v=2                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ update-context │ functional-074768 update-context --alsologtostderr -v=2                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ update-context │ functional-074768 update-context --alsologtostderr -v=2                                                                    │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format short --alsologtostderr                                                                │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format yaml --alsologtostderr                                                                 │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ ssh            │ functional-074768 ssh pgrep buildkitd                                                                                      │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │                     │
	│ image          │ functional-074768 image build -t localhost/my-image:functional-074768 testdata/build --alsologtostderr                     │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls                                                                                                 │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format json --alsologtostderr                                                                 │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	│ image          │ functional-074768 image ls --format table --alsologtostderr                                                                │ functional-074768 │ jenkins │ v1.37.0 │ 27 Oct 25 19:18 UTC │ 27 Oct 25 19:18 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:12:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:12:15.978537   69010 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:12:15.978805   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.978815   69010 out.go:374] Setting ErrFile to fd 2...
	I1027 19:12:15.978819   69010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.979054   69010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:12:15.979492   69010 out.go:368] Setting JSON to false
	I1027 19:12:15.980424   69010 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6886,"bootTime":1761585450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:12:15.980516   69010 start.go:141] virtualization: kvm guest
	I1027 19:12:15.985372   69010 out.go:179] * [functional-074768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:12:15.986974   69010 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:12:15.986983   69010 notify.go:220] Checking for updates...
	I1027 19:12:15.988280   69010 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:12:15.989521   69010 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 19:12:15.990743   69010 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 19:12:15.991901   69010 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:12:15.993094   69010 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:12:15.994971   69010 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:12:15.995574   69010 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:12:16.035343   69010 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 19:12:16.036772   69010 start.go:305] selected driver: kvm2
	I1027 19:12:16.036792   69010 start.go:925] validating driver "kvm2" against &{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.036933   69010 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:12:16.038400   69010 cni.go:84] Creating CNI manager for ""
	I1027 19:12:16.038475   69010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 19:12:16.038550   69010 start.go:349] cluster config:
	{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:16.040073   69010 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.613560056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592976613534156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=547f554d-0312-4d36-9fab-58852ccfa897 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.614132123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd2c61f0-e15d-44bf-9e68-ab9bec839ac9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.614185463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd2c61f0-e15d-44bf-9e68-ab9bec839ac9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.614506047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd2c61f0-e15d-44bf-9e68-ab9bec839ac9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.661048927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce0514b4-c452-44f9-93ce-02ef1e5fac1e name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.661136759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce0514b4-c452-44f9-93ce-02ef1e5fac1e name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.662807212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c63afc5c-9179-4ad0-a280-e6d0a5c19d52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.666200342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592976666169397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c63afc5c-9179-4ad0-a280-e6d0a5c19d52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.667456673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3645bcf8-b338-4a61-86b7-badc0d635d48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.667513143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3645bcf8-b338-4a61-86b7-badc0d635d48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.668058574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3645bcf8-b338-4a61-86b7-badc0d635d48 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.750611408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=157eb2a9-7495-4ae7-875c-52f08ddd9950 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.750783589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=157eb2a9-7495-4ae7-875c-52f08ddd9950 name=/runtime.v1.RuntimeService/Version
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.752192110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85fe3171-8cec-44aa-bac7-a7646cd1d2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.752932774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761592976752907777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201237,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85fe3171-8cec-44aa-bac7-a7646cd1d2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.753897942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36ba35cc-4196-4d3b-b66e-35d3646847a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.754017990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36ba35cc-4196-4d3b-b66e-35d3646847a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 19:22:56 functional-074768 crio[5469]: time="2025-10-27 19:22:56.754327126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7,PodSandboxId:542a950a6c94060e92400d067bc90eb44972884fe88539e14c8e6369202899ab,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761592369891065713,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c84388b6-2d7c-40a2-b560-fd225b55349a,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c75becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761592310407165550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284,PodSandboxId:f6ffd9e1d08daec9d8774fc7454fb6003f75ac9cc21e418870e28db77f9bf746,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761592310403081749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e,PodSandboxId:ea9b6a12a7c9a66ed0d575bd7da37ec71c0a5c995cb086a4d164a6214f32b7a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761592307034057590,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c94e58bf276ae150ba3be616b5d9315d,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf,PodSandboxId:3da3325cd282ffb88aa1eed45cb1758e8dd5886923e0537475bf8b2062a708f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761592306825227460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubern
etes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5,PodSandboxId:62e2964894758c1aaed419fd373fdedd326f471efed3168345a8f2e5c1a18256,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761592306781617058,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622,PodSandboxId:44c1dc33c037b1964e803b03fbe87ef48ced6c62d21ce6bbe99abb10c7d606bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761592306763270982,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e,PodSandboxId:c0d55bbc8d1fa75dc2bac99e9e6c5070f6a924127df831c2e344b53b4d4dcf73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Crea
tedAt:1761592304383828785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069,PodSandboxId:4f2d4092ef8ce6f2ecebbd65772d6c0e4c7
5becabf6c5c71e3a7ab8a6578edd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761592303924560683,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp2k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538c55aa-9e90-4bb3-83b7-f84cce86edca,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179,PodSandboxId:f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4,Meta
data:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761592275224568234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ca78600-3d29-4edc-9a1c-572cf646e83e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0,PodSandboxId:c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4,Metadata:&Contain
erMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761592271488084282,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1419f6f2dbf7cdc64e36c7697d572358,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195,PodSandboxId:af345
d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761592271427253610,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84d9d1d3cdf44b76588eee6ed2c2ed23,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78,PodSandboxId:325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761592271436590190,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-074768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d75616c8b2e6db9ba925e56dac14f36d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c,PodSandboxId:a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761592267049795999,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2lv8d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f607e8f-f4a5-475f-8bdb-d9c2889d5ada,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36ba35cc-4196-4d3b-b66e-35d3646847a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1324210b99e8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   542a950a6c940       busybox-mount
	5e03d8878eeba       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Running             kube-proxy                3                   4f2d4092ef8ce       kube-proxy-lp2k8
	d7b0eb17be9e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Running             storage-provisioner       3                   f6ffd9e1d08da       storage-provisioner
	610b55ad8d57b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      11 minutes ago      Running             kube-apiserver            0                   ea9b6a12a7c9a       kube-apiserver-functional-074768
	1ba435049ad20       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Running             kube-controller-manager   3                   3da3325cd282f       kube-controller-manager-functional-074768
	3ad24ef975b26       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Running             kube-scheduler            3                   62e2964894758       kube-scheduler-functional-074768
	c257ffa5d58fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Running             etcd                      3                   44c1dc33c037b       etcd-functional-074768
	8b855533e3a4a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   2                   c0d55bbc8d1fa       coredns-66bc5c9577-2lv8d
	a05ec0c93cfd7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                2                   4f2d4092ef8ce       kube-proxy-lp2k8
	ba777ad788f45       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       2                   f27f5919b6af8       storage-provisioner
	8f6a57ec258cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      2                   c1e8123b0e108       etcd-functional-074768
	60dc886cc82b0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   2                   325376ebe73d1       kube-controller-manager-functional-074768
	fa2d3b5fc751c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            2                   af345d07331bc       kube-scheduler-functional-074768
	b88ac5c8b6376       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   a4731d6d18ca9       coredns-66bc5c9577-2lv8d
	
	
	==> coredns [8b855533e3a4aa01919810611cc1998e2734d17f27ed9d4381f184b8ae2c789e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45293 - 5246 "HINFO IN 454679650042713632.2272985414247109723. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039332227s
	
	
	==> coredns [b88ac5c8b637643251fc39d80e23070b4065a79ea8af3e53205465bf22ed227c] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53381 - 18232 "HINFO IN 6855518255260926845.7182282404724631670. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025903505s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-074768
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-074768
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=functional-074768
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_10_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:10:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-074768
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:22:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:18:48 +0000   Mon, 27 Oct 2025 19:10:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    functional-074768
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 d60c87697e45438394c451d2f7a36472
	  System UUID:                d60c8769-7e45-4383-94c4-51d2f7a36472
	  Boot ID:                    59f8f872-6752-425e-9853-f7970fb836c8
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kbc8s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  default                     hello-node-connect-7d85dfc575-ppg9q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-zrxgm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-2lv8d                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-074768                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-074768              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-074768     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lp2k8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-074768              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vfwcs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7xqm9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node functional-074768 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           12m                node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           11m                node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-074768 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-074768 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-074768 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-074768 event: Registered Node functional-074768 in Controller
	
	
	==> dmesg <==
	[  +0.000050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.009365] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.178192] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089795] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.107366] kauditd_printk_skb: 130 callbacks suppressed
	[Oct27 19:10] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.009137] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.877352] kauditd_printk_skb: 249 callbacks suppressed
	[ +30.932062] kauditd_printk_skb: 38 callbacks suppressed
	[Oct27 19:11] kauditd_printk_skb: 350 callbacks suppressed
	[  +4.462417] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.561811] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.109963] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.392463] kauditd_printk_skb: 303 callbacks suppressed
	[  +1.946868] kauditd_printk_skb: 108 callbacks suppressed
	[Oct27 19:12] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.008563] kauditd_printk_skb: 91 callbacks suppressed
	[  +0.000147] kauditd_printk_skb: 152 callbacks suppressed
	[ +19.538164] kauditd_printk_skb: 26 callbacks suppressed
	[  +3.845026] kauditd_printk_skb: 31 callbacks suppressed
	[Oct27 19:13] kauditd_printk_skb: 38 callbacks suppressed
	[Oct27 19:18] crun[9565]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.727233] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [8f6a57ec258cde7cb5f391bf83a85ba008e698c156047225611c8eab6c6babc0] <==
	{"level":"warn","ts":"2025-10-27T19:11:13.796751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.803213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.820989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.827139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.850127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.864610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:13.965616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51752","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:11:34.886131Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:11:34.886250Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"error","ts":"2025-10-27T19:11:34.886308Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969542Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:11:34.969623Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.969765Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2025-10-27T19:11:34.969860Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-27T19:11:34.969869Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:11:34.969972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970117Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:11:34.970124Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:11:34.970130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974242Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"error","ts":"2025-10-27T19:11:34.974325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:11:34.974349Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-10-27T19:11:34.974355Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-074768","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> etcd [c257ffa5d58fa9fcc17ef7974358549bf09938dff7426d6a957c613b8071e622] <==
	{"level":"warn","ts":"2025-10-27T19:11:48.731621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.774630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.777064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.803070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.831051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.844794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.869011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.906600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.925550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.942908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.960546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:48.988468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.001773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.013937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.028896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.043608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.055856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.067861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.085770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.088921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.104882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:11:49.204360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:21:47.882964Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2025-10-27T19:21:47.906946Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1106,"took":"23.35299ms","hash":1242694967,"current-db-size-bytes":3502080,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-27T19:21:47.907008Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1242694967,"revision":1106,"compact-revision":-1}
	
	
	==> kernel <==
	 19:22:57 up 13 min,  0 users,  load average: 0.34, 0.43, 0.33
	Linux functional-074768 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [610b55ad8d57b43b4f9f14aca129a7c3f93464db3fb0040f0869e97828d3942e] <==
	I1027 19:11:49.997079       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:11:49.997909       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:11:50.006465       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:11:50.020163       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:11:50.020385       1 policy_source.go:240] refreshing policies
	I1027 19:11:50.029748       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:11:50.038132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:11:50.043717       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:11:50.135960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:11:50.786831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:11:51.391278       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:11:51.442935       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:11:51.470900       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:11:51.478456       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:11:53.405340       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:11:53.556350       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:11:56.001360       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:12:10.771036       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.125.173"}
	I1027 19:12:16.293068       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.218.230"}
	I1027 19:12:17.549055       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:12:18.028917       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.109.120"}
	I1027 19:12:18.065792       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.113.167"}
	I1027 19:12:55.519802       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.18.82"}
	I1027 19:17:20.567427       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.206.225"}
	I1027 19:21:49.945423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [1ba435049ad20a4af85b2f2d201801a5da9edf37f9f61b27284ccb0c798fbbaf] <==
	I1027 19:11:53.315333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:11:53.315385       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:11:53.315342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:11:53.316605       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:11:53.318987       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:11:53.322538       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:11:53.326745       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:11:53.327969       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:53.330177       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:53.338701       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:53.344965       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:53.347598       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:53.352566       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:53.352728       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:53.352755       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:11:53.361178       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:11:53.366609       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	E1027 19:12:17.675984       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.699011       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.727301       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.729874       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.738567       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.749280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.760742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:12:17.769967       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [60dc886cc82b05339032a646f43bb214cf04fc6adf15adc4e8500e3ad3d16a78] <==
	I1027 19:11:18.053781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:11:18.053872       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:11:18.053945       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:11:18.054071       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:11:18.054160       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:11:18.054285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:11:18.055457       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:11:18.055619       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:11:18.056424       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:11:18.056636       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.056725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:11:18.056740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:11:18.065223       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:11:18.065380       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:11:18.065450       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-074768"
	I1027 19:11:18.065513       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:11:18.067483       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:11:18.067852       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:11:18.068636       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:11:18.071317       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:11:18.075240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:11:18.078873       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:11:18.082257       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:11:18.095993       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:11:18.103590       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [5e03d8878eeba1200689ad3ffc75889ca6da8adb3ca6422abe3c15b802418bfe] <==
	I1027 19:11:50.619204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:11:50.719751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:11:50.719779       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.117"]
	E1027 19:11:50.719851       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:11:50.759219       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 19:11:50.759283       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 19:11:50.759314       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:11:50.769903       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:11:50.770217       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:11:50.770431       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:50.775267       1 config.go:200] "Starting service config controller"
	I1027 19:11:50.775302       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:11:50.775316       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:11:50.775319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:11:50.775912       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:11:50.775940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:11:50.781346       1 config.go:309] "Starting node config controller"
	I1027 19:11:50.785856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:11:50.786257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:11:50.876443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:11:50.876173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:11:50.876931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [a05ec0c93cfd736d12c8313747186f27db2021dc9a371d8d3a4438f992549069] <==
	I1027 19:11:44.169183       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:11:44.246720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:11:44.249261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-074768&limit=500&resourceVersion=0\": dial tcp 192.168.39.117:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [3ad24ef975b266d69bc17f4c4acc398bd352926f4e8a86e50c84e2a2bcd7f3c5] <==
	I1027 19:11:48.514616       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:49.895123       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:49.895488       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:49.895519       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:49.895728       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:49.951216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:49.952748       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:49.964352       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:49.964562       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:49.964647       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:50.065435       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [fa2d3b5fc751c052fe689b7cd2e4fd4d601c169b76aa924543645ff13d971195] <==
	I1027 19:11:12.865255       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:11:14.587934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:11:14.588023       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:11:14.588034       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:11:14.588040       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:11:14.713266       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:11:14.713415       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:11:14.715890       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.715936       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:14.716109       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:11:14.716175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:11:14.817022       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.878344       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:11:34.880247       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:11:34.880379       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:11:34.880567       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:11:34.882011       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:11:34.882253       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 19:22:16 functional-074768 kubelet[6302]: E1027 19:22:16.447560    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592936445569675  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:25 functional-074768 kubelet[6302]: E1027 19:22:25.139165    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:22:25 functional-074768 kubelet[6302]: E1027 19:22:25.140127    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7xqm9" podUID="672ab189-2efb-4820-827e-d59baf07200c"
	Oct 27 19:22:25 functional-074768 kubelet[6302]: E1027 19:22:25.930997    6302 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 27 19:22:25 functional-074768 kubelet[6302]: E1027 19:22:25.931052    6302 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 27 19:22:25 functional-074768 kubelet[6302]: E1027 19:22:25.931525    6302 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs_kubernetes-dashboard(925f4e38-9e4f-48d5-8c9c-0074a4032738): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 27 19:22:25 functional-074768 kubelet[6302]: E1027 19:22:25.931564    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vfwcs" podUID="925f4e38-9e4f-48d5-8c9c-0074a4032738"
	Oct 27 19:22:26 functional-074768 kubelet[6302]: E1027 19:22:26.448909    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592946448577379  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:26 functional-074768 kubelet[6302]: E1027 19:22:26.449229    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592946448577379  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:36 functional-074768 kubelet[6302]: E1027 19:22:36.139110    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-ppg9q" podUID="66fec0d9-6763-4ac7-be30-631c20dcc46e"
	Oct 27 19:22:36 functional-074768 kubelet[6302]: E1027 19:22:36.455111    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592956454294327  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:36 functional-074768 kubelet[6302]: E1027 19:22:36.455150    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592956454294327  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:37 functional-074768 kubelet[6302]: E1027 19:22:37.140270    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7xqm9" podUID="672ab189-2efb-4820-827e-d59baf07200c"
	Oct 27 19:22:41 functional-074768 kubelet[6302]: E1027 19:22:41.140033    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vfwcs" podUID="925f4e38-9e4f-48d5-8c9c-0074a4032738"
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.263544    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd75616c8b2e6db9ba925e56dac14f36d/crio-325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Error finding container 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3: Status 404 returned error can't find the container with id 325376ebe73d19cdba4b91a0c3b50e525bc1fe4cedb8425acabd64180bb74da3
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.263993    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5f607e8f-f4a5-475f-8bdb-d9c2889d5ada/crio-a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Error finding container a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1: Status 404 returned error can't find the container with id a4731d6d18ca9bedf0f80e8d150c0dd35d01d2a54d12375e767efe9ab4d14ac1
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.264484    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod84d9d1d3cdf44b76588eee6ed2c2ed23/crio-af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Error finding container af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3: Status 404 returned error can't find the container with id af345d07331bc89836a28c1110a92060b79b70e6fc12a33f95e0f664b95113e3
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.266045    6302 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8ca78600-3d29-4edc-9a1c-572cf646e83e/crio-f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Error finding container f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4: Status 404 returned error can't find the container with id f27f5919b6af890342c38a8769648e9c4a65ce64cb55dea1cf0342776751bca4
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.266863    6302 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1419f6f2dbf7cdc64e36c7697d572358/crio-c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Error finding container c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4: Status 404 returned error can't find the container with id c1e8123b0e1087e5aab8a8a7c7bcef0af0672ff02d81366f3f783974115ad5e4
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.457982    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592966457393326  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:46 functional-074768 kubelet[6302]: E1027 19:22:46.458005    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592966457393326  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:49 functional-074768 kubelet[6302]: E1027 19:22:49.142731    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7xqm9" podUID="672ab189-2efb-4820-827e-d59baf07200c"
	Oct 27 19:22:54 functional-074768 kubelet[6302]: E1027 19:22:54.141594    6302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vfwcs" podUID="925f4e38-9e4f-48d5-8c9c-0074a4032738"
	Oct 27 19:22:56 functional-074768 kubelet[6302]: E1027 19:22:56.467224    6302 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761592976465487913  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	Oct 27 19:22:56 functional-074768 kubelet[6302]: E1027 19:22:56.467253    6302 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761592976465487913  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201237}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [ba777ad788f45b33f81f28d7aecebc6c59953bb838ed7f23c33832f2534a5179] <==
	I1027 19:11:15.295791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:11:15.304293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:11:15.304711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:11:15.307781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:18.764063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:23.031174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:26.630899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:29.685570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.711342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.720392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.720513       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:11:32.721774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	I1027 19:11:32.722233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da7c4387-b27a-4f7e-ae17-d5eda90d8a7d", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a became leader
	W1027 19:11:32.734342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:32.743127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:11:32.822877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-074768_b4323ab1-ff60-4e7e-907d-d6d67bc9f70a!
	W1027 19:11:34.746637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:11:34.755281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d7b0eb17be9e97894e47220b4bbb2efa859e1155cbe7cff15e03d68b41c28284] <==
	W1027 19:22:33.491207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:33.496281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:35.500426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:35.509119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:37.512143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:37.517131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:39.520408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:39.525024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:41.528421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:41.537133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:43.540951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:43.547190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:45.550839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:45.561184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:47.564368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:47.570513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:49.574785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:49.579892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:51.583807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:51.591401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:53.595078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:53.600983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:55.605618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:55.615926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:22:57.621032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
helpers_test.go:269: (dbg) Run:  kubectl --context functional-074768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1 (109.27079ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://b1324210b99e8b9f9c9065aa41dc3791c8db409d8aa5ab8f236b2399ccb7bdb7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 19:12:49 +0000
	      Finished:     Mon, 27 Oct 2025 19:12:49 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zc6qh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zc6qh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-074768
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.582s (32.731s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kbc8s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:17:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:           10.244.0.13
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8l6jd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8l6jd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m38s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kbc8s to functional-074768
	  Warning  Failed     3m17s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m17s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    3m17s                 kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3m17s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3m2s (x2 over 5m37s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ppg9q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:16 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7d4k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t7d4k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ppg9q to functional-074768
	  Warning  Failed     10m                  kubelet            Failed to pull image "kicbase/echo-server": copying system image from manifest list: determining manifest MIME type for docker://kicbase/echo-server:latest: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     93s (x4 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     93s (x3 over 7m32s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    22s (x11 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     22s (x11 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-zrxgm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:55 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxvwx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rxvwx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-zrxgm to functional-074768
	  Warning  Failed     5m18s (x2 over 8m2s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m3s (x3 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m3s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    84s (x5 over 8m1s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     84s (x5 over 8m1s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    73s (x4 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-074768/192.168.39.117
	Start Time:       Mon, 27 Oct 2025 19:12:27 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmtcg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-gmtcg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-074768
	  Warning  Failed     6m2s (x2 over 8m38s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m33s (x3 over 8m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m33s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    114s (x5 over 8m37s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     114s (x5 over 8m37s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    103s (x4 over 10m)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vfwcs" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7xqm9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-074768 describe pod busybox-mount hello-node-75c85bcc94-kbc8s hello-node-connect-7d85dfc575-ppg9q mysql-5bb876957f-zrxgm sp-pod dashboard-metrics-scraper-77bf4d6c4c-vfwcs kubernetes-dashboard-855c9754f9-7xqm9: exit status 1
E1027 19:23:46.121018   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:25:09.196446   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/MySQL (602.68s)
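The MySQL timeout above is an image-pull failure, not a database failure: every attempt to pull docker.io/mysql:5.7 is rejected with Docker Hub's unauthenticated toomanyrequests limit, so the pod never leaves ImagePullBackOff. One possible mitigation, sketched with the profile name from this run and an arbitrary tarball name, is to fetch the image once on a host that is authenticated (or not rate limited) and side-load it so the kubelet never talks to Docker Hub:

    docker pull docker.io/mysql:5.7
    docker save -o mysql-5.7.tar docker.io/mysql:5.7
    out/minikube-linux-amd64 -p functional-074768 image load mysql-5.7.tar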

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-074768 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-074768 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-kbc8s" [33daa80e-0304-4282-ae15-24c385efbdb3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-074768 -n functional-074768
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-27 19:27:20.821152376 +0000 UTC m=+1868.745694484
functional_test.go:1460: (dbg) Run:  kubectl --context functional-074768 describe po hello-node-75c85bcc94-kbc8s -n default
functional_test.go:1460: (dbg) kubectl --context functional-074768 describe po hello-node-75c85bcc94-kbc8s -n default:
Name:             hello-node-75c85bcc94-kbc8s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-074768/192.168.39.117
Start Time:       Mon, 27 Oct 2025 19:17:20 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:           10.244.0.13
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8l6jd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8l6jd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kbc8s to functional-074768
  Warning  Failed     7m39s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     94s (x3 over 7m39s)  kubelet            Error: ErrImagePull
  Warning  Failed     94s (x2 over 4m18s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    55s (x5 over 7m39s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     55s (x5 over 7m39s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    40s (x4 over 9m59s)  kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-074768 logs hello-node-75c85bcc94-kbc8s -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-074768 logs hello-node-75c85bcc94-kbc8s -n default: exit status 1 (71.750292ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-kbc8s" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-074768 logs hello-node-75c85bcc94-kbc8s -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.52s)
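hello-node fails for the same underlying reason: the kicbase/echo-server pull is rate limited, so the pod sits in ImagePullBackOff for the whole 10m0s wait. When triaging this kind of timeout, the pull history is easiest to read from the pod's event stream; a possible query, using the pod name from this run, is:

    kubectl --context functional-074768 get events -n default --field-selector involvedObject.name=hello-node-75c85bcc94-kbc8s --sort-by=.lastTimestamp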

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 service --namespace=default --https --url hello-node: exit status 115 (251.339678ms)

                                                
                                                
-- stdout --
	https://192.168.39.117:31959
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-074768 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
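SVC_UNREACHABLE here is a downstream effect of the DeployApp failure: the hello-node Service exists and has NodePort 31959, but no running pod backs it, so minikube refuses to print the URL. Two quick checks (profile and context names taken from this run) would be:

    out/minikube-linux-amd64 -p functional-074768 service list
    kubectl --context functional-074768 get endpointslices -n default -l kubernetes.io/service-name=hello-node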

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 service hello-node --url --format={{.IP}}: exit status 115 (242.862566ms)

                                                
                                                
-- stdout --
	192.168.39.117
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-074768 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.24s)
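--format takes a Go template rendered against the discovered service URL, so {{.IP}} alone prints just the node IP; pairing it with the port reconstructs the full URL. The command still exits 115 because, as above, the service has no running endpoints. A template along the lines of the documented default (same profile assumed) would be:

    out/minikube-linux-amd64 -p functional-074768 service hello-node --url --format="http://{{.IP}}:{{.Port}}"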

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 service hello-node --url: exit status 115 (238.68508ms)

                                                
                                                
-- stdout --
	http://192.168.39.117:31959
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-074768 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.117:31959
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                    
TestPreload (161.93s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-961693 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-961693 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m41.358448969s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-961693 image pull gcr.io/k8s-minikube/busybox
E1027 20:07:16.241369   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-961693 image pull gcr.io/k8s-minikube/busybox: (2.380946558s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-961693
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-961693: (6.95717712s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-961693 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-961693 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (48.369871405s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-961693 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-27 20:08:13.930178626 +0000 UTC m=+4321.854720734
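The failure pattern here: gcr.io/k8s-minikube/busybox was pulled into the profile before the stop, but after the restart the image list no longer contains it, which suggests the CRI-O image store was repopulated from the freshly downloaded preload tarball rather than preserved across the restart. A manual re-check against the runtime itself (profile name from this run) could look like:

    out/minikube-linux-amd64 -p test-preload-961693 image list
    out/minikube-linux-amd64 -p test-preload-961693 ssh "sudo crictl images"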
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-961693 -n test-preload-961693
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-961693 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-961693 logs -n 25: (1.126480021s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-449598 ssh -n multinode-449598-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ ssh     │ multinode-449598 ssh -n multinode-449598 sudo cat /home/docker/cp-test_multinode-449598-m03_multinode-449598.txt                                          │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ cp      │ multinode-449598 cp multinode-449598-m03:/home/docker/cp-test.txt multinode-449598-m02:/home/docker/cp-test_multinode-449598-m03_multinode-449598-m02.txt │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ ssh     │ multinode-449598 ssh -n multinode-449598-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ ssh     │ multinode-449598 ssh -n multinode-449598-m02 sudo cat /home/docker/cp-test_multinode-449598-m03_multinode-449598-m02.txt                                  │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ node    │ multinode-449598 node stop m03                                                                                                                            │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:55 UTC │
	│ node    │ multinode-449598 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ node    │ list -p multinode-449598                                                                                                                                  │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │                     │
	│ stop    │ -p multinode-449598                                                                                                                                       │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p multinode-449598 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 20:00 UTC │
	│ node    │ list -p multinode-449598                                                                                                                                  │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ node    │ multinode-449598 node delete m03                                                                                                                          │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ stop    │ multinode-449598 stop                                                                                                                                     │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p multinode-449598 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:04 UTC │
	│ node    │ list -p multinode-449598                                                                                                                                  │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │                     │
	│ start   │ -p multinode-449598-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-449598-m02 │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │                     │
	│ start   │ -p multinode-449598-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-449598-m03 │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │ 27 Oct 25 20:05 UTC │
	│ node    │ add -p multinode-449598                                                                                                                                   │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:05 UTC │                     │
	│ delete  │ -p multinode-449598-m03                                                                                                                                   │ multinode-449598-m03 │ jenkins │ v1.37.0 │ 27 Oct 25 20:05 UTC │ 27 Oct 25 20:05 UTC │
	│ delete  │ -p multinode-449598                                                                                                                                       │ multinode-449598     │ jenkins │ v1.37.0 │ 27 Oct 25 20:05 UTC │ 27 Oct 25 20:05 UTC │
	│ start   │ -p test-preload-961693 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-961693  │ jenkins │ v1.37.0 │ 27 Oct 25 20:05 UTC │ 27 Oct 25 20:07 UTC │
	│ image   │ test-preload-961693 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-961693  │ jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:07 UTC │
	│ stop    │ -p test-preload-961693                                                                                                                                    │ test-preload-961693  │ jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:07 UTC │
	│ start   │ -p test-preload-961693 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-961693  │ jenkins │ v1.37.0 │ 27 Oct 25 20:07 UTC │ 27 Oct 25 20:08 UTC │
	│ image   │ test-preload-961693 image list                                                                                                                            │ test-preload-961693  │ jenkins │ v1.37.0 │ 27 Oct 25 20:08 UTC │ 27 Oct 25 20:08 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:07:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:07:25.418765   89696 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:07:25.419058   89696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:07:25.419070   89696 out.go:374] Setting ErrFile to fd 2...
	I1027 20:07:25.419074   89696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:07:25.419251   89696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:07:25.419751   89696 out.go:368] Setting JSON to false
	I1027 20:07:25.420593   89696 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10195,"bootTime":1761585450,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 20:07:25.420682   89696 start.go:141] virtualization: kvm guest
	I1027 20:07:25.422661   89696 out.go:179] * [test-preload-961693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 20:07:25.423995   89696 notify.go:220] Checking for updates...
	I1027 20:07:25.424059   89696 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:07:25.425612   89696 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:07:25.427053   89696 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:07:25.428410   89696 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:07:25.429681   89696 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 20:07:25.431139   89696 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:07:25.432991   89696 config.go:182] Loaded profile config "test-preload-961693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1027 20:07:25.435086   89696 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1027 20:07:25.436509   89696 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:07:25.473084   89696 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 20:07:25.474637   89696 start.go:305] selected driver: kvm2
	I1027 20:07:25.474655   89696 start.go:925] validating driver "kvm2" against &{Name:test-preload-961693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-961693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:07:25.474768   89696 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:07:25.475731   89696 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:07:25.475768   89696 cni.go:84] Creating CNI manager for ""
	I1027 20:07:25.475812   89696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:07:25.475851   89696 start.go:349] cluster config:
	{Name:test-preload-961693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-961693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:07:25.475938   89696 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:07:25.477638   89696 out.go:179] * Starting "test-preload-961693" primary control-plane node in "test-preload-961693" cluster
	I1027 20:07:25.479042   89696 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1027 20:07:25.508933   89696 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1027 20:07:25.508959   89696 cache.go:58] Caching tarball of preloaded images
	I1027 20:07:25.509130   89696 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1027 20:07:25.511043   89696 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1027 20:07:25.512276   89696 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 20:07:25.539704   89696 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1027 20:07:25.539759   89696 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1027 20:07:27.919904   89696 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1027 20:07:27.920141   89696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/config.json ...
	I1027 20:07:27.920427   89696 start.go:360] acquireMachinesLock for test-preload-961693: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 20:07:27.920510   89696 start.go:364] duration metric: took 56.098µs to acquireMachinesLock for "test-preload-961693"
	I1027 20:07:27.920534   89696 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:07:27.920542   89696 fix.go:54] fixHost starting: 
	I1027 20:07:27.922689   89696 fix.go:112] recreateIfNeeded on test-preload-961693: state=Stopped err=<nil>
	W1027 20:07:27.922743   89696 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 20:07:27.924687   89696 out.go:252] * Restarting existing kvm2 VM for "test-preload-961693" ...
	I1027 20:07:27.924764   89696 main.go:141] libmachine: starting domain...
	I1027 20:07:27.924778   89696 main.go:141] libmachine: ensuring networks are active...
	I1027 20:07:27.925553   89696 main.go:141] libmachine: Ensuring network default is active
	I1027 20:07:27.925984   89696 main.go:141] libmachine: Ensuring network mk-test-preload-961693 is active
	I1027 20:07:27.926498   89696 main.go:141] libmachine: getting domain XML...
	I1027 20:07:27.927636   89696 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-961693</name>
	  <uuid>af1f9cb1-3df1-4942-add6-1a7198b6a02f</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/test-preload-961693.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a7:d0:d8'/>
	      <source network='mk-test-preload-961693'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:fa:41:70'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
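The XML above is the libvirt domain definition minikube re-uses to restart the existing test-preload-961693 VM: 3 GiB of memory, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, the raw disk as a virtio device, and two virtio NICs on the mk-test-preload-961693 and default networks. If the same definition needs to be inspected outside of minikube, libvirt can dump it directly:

    virsh dumpxml test-preload-961693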
	
	I1027 20:07:29.190200   89696 main.go:141] libmachine: waiting for domain to start...
	I1027 20:07:29.191597   89696 main.go:141] libmachine: domain is now running
	I1027 20:07:29.191620   89696 main.go:141] libmachine: waiting for IP...
	I1027 20:07:29.192537   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:29.193108   89696 main.go:141] libmachine: domain test-preload-961693 has current primary IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:29.193122   89696 main.go:141] libmachine: found domain IP: 192.168.39.215
	I1027 20:07:29.193129   89696 main.go:141] libmachine: reserving static IP address...
	I1027 20:07:29.193564   89696 main.go:141] libmachine: found host DHCP lease matching {name: "test-preload-961693", mac: "52:54:00:a7:d0:d8", ip: "192.168.39.215"} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:05:50 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:29.193601   89696 main.go:141] libmachine: skip adding static IP to network mk-test-preload-961693 - found existing host DHCP lease matching {name: "test-preload-961693", mac: "52:54:00:a7:d0:d8", ip: "192.168.39.215"}
	I1027 20:07:29.193618   89696 main.go:141] libmachine: reserved static IP address 192.168.39.215 for domain test-preload-961693
	I1027 20:07:29.193623   89696 main.go:141] libmachine: waiting for SSH...
	I1027 20:07:29.193634   89696 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 20:07:29.195864   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:29.196287   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:05:50 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:29.196312   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:29.196463   89696 main.go:141] libmachine: Using SSH client type: native
	I1027 20:07:29.196675   89696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1027 20:07:29.196685   89696 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 20:07:32.256419   89696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.215:22: connect: no route to host
	I1027 20:07:38.336373   89696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.215:22: connect: no route to host
	I1027 20:07:41.337746   89696 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.215:22: connect: connection refused
	I1027 20:07:44.440888   89696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:07:44.444215   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.444635   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.444668   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.444877   89696 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/config.json ...
	I1027 20:07:44.445091   89696 machine.go:93] provisionDockerMachine start ...
	I1027 20:07:44.447388   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.447760   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.447793   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.447962   89696 main.go:141] libmachine: Using SSH client type: native
	I1027 20:07:44.448193   89696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1027 20:07:44.448207   89696 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:07:44.547115   89696 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 20:07:44.547156   89696 buildroot.go:166] provisioning hostname "test-preload-961693"
	I1027 20:07:44.550250   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.550689   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.550715   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.550894   89696 main.go:141] libmachine: Using SSH client type: native
	I1027 20:07:44.551107   89696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1027 20:07:44.551124   89696 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-961693 && echo "test-preload-961693" | sudo tee /etc/hostname
	I1027 20:07:44.669582   89696 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-961693
	
	I1027 20:07:44.672918   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.673476   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.673507   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.673689   89696 main.go:141] libmachine: Using SSH client type: native
	I1027 20:07:44.673891   89696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1027 20:07:44.673908   89696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-961693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-961693/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-961693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:07:44.785701   89696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:07:44.785735   89696 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 20:07:44.785771   89696 buildroot.go:174] setting up certificates
	I1027 20:07:44.785783   89696 provision.go:84] configureAuth start
	I1027 20:07:44.789335   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.789776   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.789810   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.793523   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.794277   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.794312   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.794505   89696 provision.go:143] copyHostCerts
	I1027 20:07:44.794588   89696 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem, removing ...
	I1027 20:07:44.794615   89696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem
	I1027 20:07:44.794719   89696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 20:07:44.794842   89696 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem, removing ...
	I1027 20:07:44.794856   89696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem
	I1027 20:07:44.794896   89696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 20:07:44.794977   89696 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem, removing ...
	I1027 20:07:44.794987   89696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem
	I1027 20:07:44.795025   89696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 20:07:44.795137   89696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.test-preload-961693 san=[127.0.0.1 192.168.39.215 localhost minikube test-preload-961693]
	I1027 20:07:44.938061   89696 provision.go:177] copyRemoteCerts
	I1027 20:07:44.938129   89696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:07:44.940659   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.941082   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:44.941107   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:44.941314   89696 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/id_rsa Username:docker}
	I1027 20:07:45.023727   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:07:45.055100   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1027 20:07:45.086419   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 20:07:45.117828   89696 provision.go:87] duration metric: took 332.024953ms to configureAuth
	I1027 20:07:45.117861   89696 buildroot.go:189] setting minikube options for container-runtime
	I1027 20:07:45.118100   89696 config.go:182] Loaded profile config "test-preload-961693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1027 20:07:45.121198   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.121632   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:45.121659   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.121856   89696 main.go:141] libmachine: Using SSH client type: native
	I1027 20:07:45.122121   89696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1027 20:07:45.122139   89696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:07:45.369552   89696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:07:45.369595   89696 machine.go:96] duration metric: took 924.488732ms to provisionDockerMachine
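	The printf pipeline above amounts to writing a one-line sysconfig drop-in and restarting CRI-O. A minimal sketch of the same step, assuming root shell access on the guest (not the runner's own SSH plumbing):
	  sudo mkdir -p /etc/sysconfig
	  # write the insecure-registry option that CRI-O's unit file picks up
	  printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio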
	I1027 20:07:45.369613   89696 start.go:293] postStartSetup for "test-preload-961693" (driver="kvm2")
	I1027 20:07:45.369628   89696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:07:45.369714   89696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:07:45.372915   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.373349   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:45.373383   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.373573   89696 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/id_rsa Username:docker}
	I1027 20:07:45.455727   89696 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:07:45.461374   89696 info.go:137] Remote host: Buildroot 2025.02
	I1027 20:07:45.461427   89696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 20:07:45.461508   89696 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 20:07:45.461605   89696 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem -> 627052.pem in /etc/ssl/certs
	I1027 20:07:45.461709   89696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:07:45.474139   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:07:45.505862   89696 start.go:296] duration metric: took 136.229298ms for postStartSetup
	I1027 20:07:45.505918   89696 fix.go:56] duration metric: took 17.585375844s for fixHost
	I1027 20:07:45.508467   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.508949   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:45.508977   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.509211   89696 main.go:141] libmachine: Using SSH client type: native
	I1027 20:07:45.509439   89696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1027 20:07:45.509451   89696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 20:07:45.611087   89696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761595665.561871077
	
	I1027 20:07:45.611110   89696 fix.go:216] guest clock: 1761595665.561871077
	I1027 20:07:45.611119   89696 fix.go:229] Guest: 2025-10-27 20:07:45.561871077 +0000 UTC Remote: 2025-10-27 20:07:45.505927974 +0000 UTC m=+20.136175448 (delta=55.943103ms)
	I1027 20:07:45.611135   89696 fix.go:200] guest clock delta is within tolerance: 55.943103ms
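	As an arithmetic check of the delta reported above: the guest clock reads epoch 1761595665.561871077, and the host-side reference 2025-10-27 20:07:45.505927974 UTC corresponds to epoch 1761595665.505927974, so 1761595665.561871077 - 1761595665.505927974 ≈ 0.055943103 s, i.e. the logged 55.943103ms, well inside the tolerance window.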
	I1027 20:07:45.611141   89696 start.go:83] releasing machines lock for "test-preload-961693", held for 17.690616359s
	I1027 20:07:45.614027   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.614468   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:45.614493   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.615052   89696 ssh_runner.go:195] Run: cat /version.json
	I1027 20:07:45.615129   89696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:07:45.618301   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.618385   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.618734   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:45.618758   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.618808   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:45.618832   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:45.618902   89696 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/id_rsa Username:docker}
	I1027 20:07:45.619126   89696 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/id_rsa Username:docker}
	I1027 20:07:45.699206   89696 ssh_runner.go:195] Run: systemctl --version
	I1027 20:07:45.731099   89696 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:07:45.878811   89696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:07:45.886757   89696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:07:45.886838   89696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:07:45.908395   89696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 20:07:45.908435   89696 start.go:495] detecting cgroup driver to use...
	I1027 20:07:45.908510   89696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:07:45.930814   89696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:07:45.952491   89696 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:07:45.952575   89696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:07:45.974006   89696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:07:45.991974   89696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:07:46.154219   89696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:07:46.377293   89696 docker.go:234] disabling docker service ...
	I1027 20:07:46.377384   89696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:07:46.395625   89696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:07:46.411828   89696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:07:46.579093   89696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:07:46.728910   89696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:07:46.746527   89696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:07:46.772097   89696 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1027 20:07:46.772193   89696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.786379   89696 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:07:46.786463   89696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.800322   89696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.814231   89696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.827893   89696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:07:46.842463   89696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.856986   89696 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.880995   89696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:07:46.896056   89696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:07:46.907927   89696 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 20:07:46.908001   89696 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 20:07:46.929812   89696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
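	The three commands above follow a check-then-load pattern for bridge netfilter; a compact sketch of the same sequence, assuming root on the guest:
	  # if the sysctl key is missing, the br_netfilter module has not been loaded yet
	  if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	  fi
	  # enable IPv4 forwarding for pod traffic
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"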
	I1027 20:07:46.942363   89696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:07:47.089757   89696 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:07:47.205560   89696 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:07:47.205634   89696 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:07:47.212139   89696 start.go:563] Will wait 60s for crictl version
	I1027 20:07:47.212238   89696 ssh_runner.go:195] Run: which crictl
	I1027 20:07:47.216893   89696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 20:07:47.261907   89696 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 20:07:47.261990   89696 ssh_runner.go:195] Run: crio --version
	I1027 20:07:47.295820   89696 ssh_runner.go:195] Run: crio --version
	I1027 20:07:47.329782   89696 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1027 20:07:47.333408   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:47.333788   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:07:47.333811   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:07:47.333968   89696 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1027 20:07:47.338895   89696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:07:47.354930   89696 kubeadm.go:883] updating cluster {Name:test-preload-961693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-961693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:07:47.355062   89696 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1027 20:07:47.355120   89696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:07:47.399847   89696 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1027 20:07:47.399940   89696 ssh_runner.go:195] Run: which lz4
	I1027 20:07:47.404960   89696 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 20:07:47.410163   89696 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 20:07:47.410201   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1027 20:07:49.018845   89696 crio.go:462] duration metric: took 1.613925546s to copy over tarball
	I1027 20:07:49.018934   89696 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 20:07:50.766020   89696 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.747051809s)
	I1027 20:07:50.766077   89696 crio.go:469] duration metric: took 1.747199414s to extract the tarball
	I1027 20:07:50.766087   89696 ssh_runner.go:146] rm: /preloaded.tar.lz4
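	The preload handling above is a check, copy, extract, and clean-up sequence; a sketch of the equivalent guest-side commands (the copy itself is performed by minikube's own SSH runner rather than a shell tool):
	  # check whether the preload tarball is already on the guest; if not, it is copied over SSH
	  stat -c "%s %y" /preloaded.tar.lz4
	  # unpack the cached container images into /var so CRI-O finds them, then drop the archive
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm /preloaded.tar.lz4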
	I1027 20:07:50.807795   89696 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:07:50.853376   89696 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:07:50.853404   89696 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:07:50.853412   89696 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.32.0 crio true true} ...
	I1027 20:07:50.853512   89696 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-961693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-961693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 20:07:50.853575   89696 ssh_runner.go:195] Run: crio config
	I1027 20:07:50.901956   89696 cni.go:84] Creating CNI manager for ""
	I1027 20:07:50.901996   89696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:07:50.902022   89696 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:07:50.902063   89696 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-961693 NodeName:test-preload-961693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:07:50.902186   89696 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-961693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
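	A generated config like the one above can be sanity-checked before the init phases run. One possible check, assuming the kubeadm binary staged under /var/lib/minikube/binaries supports the validate subcommand (present in recent kubeadm releases):
	  sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml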
	
	I1027 20:07:50.902259   89696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1027 20:07:50.915289   89696 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:07:50.915374   89696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:07:50.927826   89696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1027 20:07:50.950136   89696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:07:50.971775   89696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1027 20:07:50.994660   89696 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1027 20:07:50.999553   89696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:07:51.015565   89696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:07:51.157273   89696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:07:51.189567   89696 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693 for IP: 192.168.39.215
	I1027 20:07:51.189588   89696 certs.go:195] generating shared ca certs ...
	I1027 20:07:51.189612   89696 certs.go:227] acquiring lock for ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:07:51.189772   89696 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 20:07:51.189808   89696 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 20:07:51.189819   89696 certs.go:257] generating profile certs ...
	I1027 20:07:51.189894   89696 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.key
	I1027 20:07:51.190011   89696 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/apiserver.key.16abce22
	I1027 20:07:51.190089   89696 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/proxy-client.key
	I1027 20:07:51.190212   89696 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem (1338 bytes)
	W1027 20:07:51.190248   89696 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705_empty.pem, impossibly tiny 0 bytes
	I1027 20:07:51.190258   89696 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 20:07:51.190278   89696 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:07:51.190300   89696 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:07:51.190320   89696 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 20:07:51.190362   89696 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:07:51.190887   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:07:51.231300   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:07:51.274826   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:07:51.307081   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 20:07:51.339716   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 20:07:51.373828   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:07:51.405843   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:07:51.437959   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:07:51.469648   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem --> /usr/share/ca-certificates/62705.pem (1338 bytes)
	I1027 20:07:51.501616   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /usr/share/ca-certificates/627052.pem (1708 bytes)
	I1027 20:07:51.532527   89696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:07:51.562940   89696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:07:51.584438   89696 ssh_runner.go:195] Run: openssl version
	I1027 20:07:51.591099   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/62705.pem && ln -fs /usr/share/ca-certificates/62705.pem /etc/ssl/certs/62705.pem"
	I1027 20:07:51.604413   89696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/62705.pem
	I1027 20:07:51.609978   89696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:09 /usr/share/ca-certificates/62705.pem
	I1027 20:07:51.610086   89696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/62705.pem
	I1027 20:07:51.618087   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/62705.pem /etc/ssl/certs/51391683.0"
	I1027 20:07:51.631560   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/627052.pem && ln -fs /usr/share/ca-certificates/627052.pem /etc/ssl/certs/627052.pem"
	I1027 20:07:51.644849   89696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/627052.pem
	I1027 20:07:51.650417   89696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:09 /usr/share/ca-certificates/627052.pem
	I1027 20:07:51.650494   89696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/627052.pem
	I1027 20:07:51.658167   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/627052.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:07:51.672337   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:07:51.685871   89696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:07:51.691389   89696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:07:51.691458   89696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:07:51.699066   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
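	The ls/openssl/ln sequence above installs each CA under its OpenSSL subject-hash name so the system trust store can resolve it; a minimal sketch of that pattern for the minikubeCA certificate:
	  # compute the subject hash (b5213941 for this CA, per the symlink created above)
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"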
	I1027 20:07:51.713271   89696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:07:51.718773   89696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 20:07:51.726499   89696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 20:07:51.734321   89696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 20:07:51.742592   89696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 20:07:51.750505   89696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 20:07:51.758148   89696 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
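	The -checkend 86400 probes above succeed only if each control-plane certificate remains valid for at least the next 24 hours (86400 seconds); for example:
	  # exits 0 and prints "Certificate will not expire" when the cert outlives the window
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400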
	I1027 20:07:51.766063   89696 kubeadm.go:400] StartCluster: {Name:test-preload-961693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-961693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:07:51.766145   89696 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:07:51.766230   89696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:07:51.807681   89696 cri.go:89] found id: ""
	I1027 20:07:51.807760   89696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:07:51.820507   89696 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 20:07:51.820527   89696 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 20:07:51.820574   89696 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 20:07:51.833487   89696 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 20:07:51.833954   89696 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-961693" does not appear in /home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:07:51.834126   89696 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-58821/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-961693" cluster setting kubeconfig missing "test-preload-961693" context setting]
	I1027 20:07:51.834402   89696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/kubeconfig: {Name:mk90c4d883178b7191d62a8cd99434bc24dd555f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:07:51.834913   89696 kapi.go:59] client config for test-preload-961693: &rest.Config{Host:"https://192.168.39.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.key", CAFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 20:07:51.835433   89696 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 20:07:51.835450   89696 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 20:07:51.835457   89696 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 20:07:51.835463   89696 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 20:07:51.835469   89696 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 20:07:51.835792   89696 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 20:07:51.848418   89696 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.215
	I1027 20:07:51.848457   89696 kubeadm.go:1160] stopping kube-system containers ...
	I1027 20:07:51.848474   89696 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1027 20:07:51.848531   89696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:07:51.892685   89696 cri.go:89] found id: ""
	I1027 20:07:51.892770   89696 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1027 20:07:51.918691   89696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:07:51.932577   89696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:07:51.932598   89696 kubeadm.go:157] found existing configuration files:
	
	I1027 20:07:51.932647   89696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 20:07:51.945117   89696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:07:51.945194   89696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:07:51.958564   89696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 20:07:51.970077   89696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:07:51.970132   89696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:07:51.983090   89696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:07:51.994761   89696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:07:51.994824   89696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:07:52.007557   89696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:07:52.019844   89696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:07:52.019928   89696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 20:07:52.033089   89696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:07:52.045919   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 20:07:52.105104   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 20:07:52.997602   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1027 20:07:53.265170   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 20:07:53.343914   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1027 20:07:53.443255   89696 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:07:53.443346   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:07:53.944354   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:07:54.444355   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:07:54.944296   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:07:55.443839   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:07:55.943616   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:07:55.968104   89696 api_server.go:72] duration metric: took 2.524856695s to wait for apiserver process to appear ...
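	The repeated pgrep calls above are a simple poll-until-present loop; a sketch of the equivalent wait, assuming the same ~500ms retry interval:
	  # -x exact match, -n newest, -f match against the full command line
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 0.5
	  done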
	I1027 20:07:55.968136   89696 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:07:55.968161   89696 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1027 20:07:58.530657   89696 api_server.go:279] https://192.168.39.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 20:07:58.530688   89696 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 20:07:58.530706   89696 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1027 20:07:58.546330   89696 api_server.go:279] https://192.168.39.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 20:07:58.546363   89696 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 20:07:58.969090   89696 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1027 20:07:58.974473   89696 api_server.go:279] https://192.168.39.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:07:58.974511   89696 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:07:59.469226   89696 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1027 20:07:59.477768   89696 api_server.go:279] https://192.168.39.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:07:59.477823   89696 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:07:59.968470   89696 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1027 20:07:59.973958   89696 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I1027 20:07:59.982731   89696 api_server.go:141] control plane version: v1.32.0
	I1027 20:07:59.982763   89696 api_server.go:131] duration metric: took 4.014618164s to wait for apiserver health ...
	I1027 20:07:59.982773   89696 cni.go:84] Creating CNI manager for ""
	I1027 20:07:59.982779   89696 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:07:59.984223   89696 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1027 20:07:59.985566   89696 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1027 20:08:00.014255   89696 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1027 20:08:00.060940   89696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:08:00.068411   89696 system_pods.go:59] 7 kube-system pods found
	I1027 20:08:00.068462   89696 system_pods.go:61] "coredns-668d6bf9bc-lnhq6" [18552cbc-74a9-427a-a871-8c7e1da26a73] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:08:00.068480   89696 system_pods.go:61] "etcd-test-preload-961693" [4290e931-3d7b-44a9-bf67-eaedea02e151] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:08:00.068491   89696 system_pods.go:61] "kube-apiserver-test-preload-961693" [b6ce50d7-3311-4d88-80ac-06d0b101183f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:08:00.068498   89696 system_pods.go:61] "kube-controller-manager-test-preload-961693" [236a0d2e-69ae-4615-b586-63eca7da7c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:08:00.068505   89696 system_pods.go:61] "kube-proxy-zgsbw" [001949f9-6828-4e36-a92b-0b8e41869ea1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 20:08:00.068513   89696 system_pods.go:61] "kube-scheduler-test-preload-961693" [a96f4a11-e415-4895-bee9-9d8c217b6d7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:08:00.068521   89696 system_pods.go:61] "storage-provisioner" [1e83cd2d-5f98-4bc3-9fbf-db72b7bf2774] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:08:00.068528   89696 system_pods.go:74] duration metric: took 7.549497ms to wait for pod list to return data ...
	I1027 20:08:00.068538   89696 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:08:00.076237   89696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 20:08:00.076282   89696 node_conditions.go:123] node cpu capacity is 2
	I1027 20:08:00.076298   89696 node_conditions.go:105] duration metric: took 7.753039ms to run NodePressure ...
	I1027 20:08:00.076367   89696 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1027 20:08:00.357665   89696 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1027 20:08:00.361824   89696 kubeadm.go:743] kubelet initialised
	I1027 20:08:00.361847   89696 kubeadm.go:744] duration metric: took 4.152953ms waiting for restarted kubelet to initialise ...
	I1027 20:08:00.361866   89696 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:08:00.378244   89696 ops.go:34] apiserver oom_adj: -16
	I1027 20:08:00.378270   89696 kubeadm.go:601] duration metric: took 8.557737197s to restartPrimaryControlPlane
	I1027 20:08:00.378282   89696 kubeadm.go:402] duration metric: took 8.612254107s to StartCluster
	I1027 20:08:00.378306   89696 settings.go:142] acquiring lock: {Name:mk19a39086427cb47b9bb78fd0b5176c91a751d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:08:00.378501   89696 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:08:00.379111   89696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/kubeconfig: {Name:mk90c4d883178b7191d62a8cd99434bc24dd555f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:08:00.379389   89696 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:08:00.379462   89696 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:08:00.379566   89696 addons.go:69] Setting storage-provisioner=true in profile "test-preload-961693"
	I1027 20:08:00.379589   89696 addons.go:238] Setting addon storage-provisioner=true in "test-preload-961693"
	W1027 20:08:00.379602   89696 addons.go:247] addon storage-provisioner should already be in state true
	I1027 20:08:00.379603   89696 config.go:182] Loaded profile config "test-preload-961693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1027 20:08:00.379619   89696 addons.go:69] Setting default-storageclass=true in profile "test-preload-961693"
	I1027 20:08:00.379633   89696 host.go:66] Checking if "test-preload-961693" exists ...
	I1027 20:08:00.379650   89696 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-961693"
	I1027 20:08:00.381326   89696 out.go:179] * Verifying Kubernetes components...
	I1027 20:08:00.381861   89696 kapi.go:59] client config for test-preload-961693: &rest.Config{Host:"https://192.168.39.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.key", CAFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 20:08:00.382179   89696 addons.go:238] Setting addon default-storageclass=true in "test-preload-961693"
	W1027 20:08:00.382195   89696 addons.go:247] addon default-storageclass should already be in state true
	I1027 20:08:00.382214   89696 host.go:66] Checking if "test-preload-961693" exists ...
	I1027 20:08:00.382874   89696 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:08:00.382923   89696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:08:00.383644   89696 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:08:00.383660   89696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:08:00.384274   89696 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:08:00.384298   89696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:08:00.386332   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:08:00.386761   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:08:00.386793   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:08:00.386862   89696 main.go:141] libmachine: domain test-preload-961693 has defined MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:08:00.387069   89696 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/id_rsa Username:docker}
	I1027 20:08:00.387461   89696 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a7:d0:d8", ip: ""} in network mk-test-preload-961693: {Iface:virbr1 ExpiryTime:2025-10-27 21:07:40 +0000 UTC Type:0 Mac:52:54:00:a7:d0:d8 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-961693 Clientid:01:52:54:00:a7:d0:d8}
	I1027 20:08:00.387488   89696 main.go:141] libmachine: domain test-preload-961693 has defined IP address 192.168.39.215 and MAC address 52:54:00:a7:d0:d8 in network mk-test-preload-961693
	I1027 20:08:00.387638   89696 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/test-preload-961693/id_rsa Username:docker}
	I1027 20:08:00.651613   89696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:08:00.689944   89696 node_ready.go:35] waiting up to 6m0s for node "test-preload-961693" to be "Ready" ...
	I1027 20:08:00.828600   89696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:08:00.833798   89696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:08:01.532282   89696 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1027 20:08:01.533593   89696 addons.go:514] duration metric: took 1.154124978s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1027 20:08:02.694821   89696 node_ready.go:57] node "test-preload-961693" has "Ready":"False" status (will retry)
	W1027 20:08:05.193571   89696 node_ready.go:57] node "test-preload-961693" has "Ready":"False" status (will retry)
	W1027 20:08:07.194467   89696 node_ready.go:57] node "test-preload-961693" has "Ready":"False" status (will retry)
	I1027 20:08:09.193464   89696 node_ready.go:49] node "test-preload-961693" is "Ready"
	I1027 20:08:09.193511   89696 node_ready.go:38] duration metric: took 8.50347812s for node "test-preload-961693" to be "Ready" ...
	I1027 20:08:09.193529   89696 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:08:09.193593   89696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:08:09.215404   89696 api_server.go:72] duration metric: took 8.835977141s to wait for apiserver process to appear ...
	I1027 20:08:09.215438   89696 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:08:09.215466   89696 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1027 20:08:09.219900   89696 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I1027 20:08:09.220841   89696 api_server.go:141] control plane version: v1.32.0
	I1027 20:08:09.220861   89696 api_server.go:131] duration metric: took 5.415786ms to wait for apiserver health ...
	I1027 20:08:09.220872   89696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:08:09.224494   89696 system_pods.go:59] 7 kube-system pods found
	I1027 20:08:09.224520   89696 system_pods.go:61] "coredns-668d6bf9bc-lnhq6" [18552cbc-74a9-427a-a871-8c7e1da26a73] Running
	I1027 20:08:09.224525   89696 system_pods.go:61] "etcd-test-preload-961693" [4290e931-3d7b-44a9-bf67-eaedea02e151] Running
	I1027 20:08:09.224529   89696 system_pods.go:61] "kube-apiserver-test-preload-961693" [b6ce50d7-3311-4d88-80ac-06d0b101183f] Running
	I1027 20:08:09.224538   89696 system_pods.go:61] "kube-controller-manager-test-preload-961693" [236a0d2e-69ae-4615-b586-63eca7da7c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:08:09.224543   89696 system_pods.go:61] "kube-proxy-zgsbw" [001949f9-6828-4e36-a92b-0b8e41869ea1] Running
	I1027 20:08:09.224557   89696 system_pods.go:61] "kube-scheduler-test-preload-961693" [a96f4a11-e415-4895-bee9-9d8c217b6d7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:08:09.224562   89696 system_pods.go:61] "storage-provisioner" [1e83cd2d-5f98-4bc3-9fbf-db72b7bf2774] Running
	I1027 20:08:09.224570   89696 system_pods.go:74] duration metric: took 3.690438ms to wait for pod list to return data ...
	I1027 20:08:09.224584   89696 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:08:09.227496   89696 default_sa.go:45] found service account: "default"
	I1027 20:08:09.227519   89696 default_sa.go:55] duration metric: took 2.928172ms for default service account to be created ...
	I1027 20:08:09.227527   89696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:08:09.232302   89696 system_pods.go:86] 7 kube-system pods found
	I1027 20:08:09.232329   89696 system_pods.go:89] "coredns-668d6bf9bc-lnhq6" [18552cbc-74a9-427a-a871-8c7e1da26a73] Running
	I1027 20:08:09.232335   89696 system_pods.go:89] "etcd-test-preload-961693" [4290e931-3d7b-44a9-bf67-eaedea02e151] Running
	I1027 20:08:09.232339   89696 system_pods.go:89] "kube-apiserver-test-preload-961693" [b6ce50d7-3311-4d88-80ac-06d0b101183f] Running
	I1027 20:08:09.232348   89696 system_pods.go:89] "kube-controller-manager-test-preload-961693" [236a0d2e-69ae-4615-b586-63eca7da7c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:08:09.232351   89696 system_pods.go:89] "kube-proxy-zgsbw" [001949f9-6828-4e36-a92b-0b8e41869ea1] Running
	I1027 20:08:09.232358   89696 system_pods.go:89] "kube-scheduler-test-preload-961693" [a96f4a11-e415-4895-bee9-9d8c217b6d7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:08:09.232361   89696 system_pods.go:89] "storage-provisioner" [1e83cd2d-5f98-4bc3-9fbf-db72b7bf2774] Running
	I1027 20:08:09.232368   89696 system_pods.go:126] duration metric: took 4.83611ms to wait for k8s-apps to be running ...
	I1027 20:08:09.232377   89696 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:08:09.232426   89696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:08:09.257558   89696 system_svc.go:56] duration metric: took 25.166563ms WaitForService to wait for kubelet
	I1027 20:08:09.257591   89696 kubeadm.go:586] duration metric: took 8.878175741s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:08:09.257608   89696 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:08:09.260775   89696 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 20:08:09.260797   89696 node_conditions.go:123] node cpu capacity is 2
	I1027 20:08:09.260809   89696 node_conditions.go:105] duration metric: took 3.196029ms to run NodePressure ...
	I1027 20:08:09.260820   89696 start.go:241] waiting for startup goroutines ...
	I1027 20:08:09.260827   89696 start.go:246] waiting for cluster config update ...
	I1027 20:08:09.260837   89696 start.go:255] writing updated cluster config ...
	I1027 20:08:09.261134   89696 ssh_runner.go:195] Run: rm -f paused
	I1027 20:08:09.267051   89696 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:08:09.267653   89696 kapi.go:59] client config for test-preload-961693: &rest.Config{Host:"https://192.168.39.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/test-preload-961693/client.key", CAFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 20:08:09.270648   89696 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-lnhq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:09.275325   89696 pod_ready.go:94] pod "coredns-668d6bf9bc-lnhq6" is "Ready"
	I1027 20:08:09.275352   89696 pod_ready.go:86] duration metric: took 4.679244ms for pod "coredns-668d6bf9bc-lnhq6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:09.278015   89696 pod_ready.go:83] waiting for pod "etcd-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:09.283309   89696 pod_ready.go:94] pod "etcd-test-preload-961693" is "Ready"
	I1027 20:08:09.283347   89696 pod_ready.go:86] duration metric: took 5.285105ms for pod "etcd-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:09.286450   89696 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:09.291401   89696 pod_ready.go:94] pod "kube-apiserver-test-preload-961693" is "Ready"
	I1027 20:08:09.291431   89696 pod_ready.go:86] duration metric: took 4.943565ms for pod "kube-apiserver-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:09.294220   89696 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:08:11.300173   89696 pod_ready.go:104] pod "kube-controller-manager-test-preload-961693" is not "Ready", error: <nil>
	I1027 20:08:12.801506   89696 pod_ready.go:94] pod "kube-controller-manager-test-preload-961693" is "Ready"
	I1027 20:08:12.801537   89696 pod_ready.go:86] duration metric: took 3.507296054s for pod "kube-controller-manager-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:12.803890   89696 pod_ready.go:83] waiting for pod "kube-proxy-zgsbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:13.071627   89696 pod_ready.go:94] pod "kube-proxy-zgsbw" is "Ready"
	I1027 20:08:13.071656   89696 pod_ready.go:86] duration metric: took 267.741712ms for pod "kube-proxy-zgsbw" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:13.272555   89696 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:13.672295   89696 pod_ready.go:94] pod "kube-scheduler-test-preload-961693" is "Ready"
	I1027 20:08:13.672322   89696 pod_ready.go:86] duration metric: took 399.738923ms for pod "kube-scheduler-test-preload-961693" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:08:13.672334   89696 pod_ready.go:40] duration metric: took 4.405251575s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:08:13.715394   89696 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1027 20:08:13.717343   89696 out.go:203] 
	W1027 20:08:13.718833   89696 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1027 20:08:13.720136   89696 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1027 20:08:13.721500   89696 out.go:179] * Done! kubectl is now configured to use "test-preload-961693" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.565716734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45734837-4abc-4c0d-87f8-7d5e23579873 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.566892740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13cfbadd-d620-4191-882d-1870f9d74f0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.567300867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595694567279007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13cfbadd-d620-4191-882d-1870f9d74f0c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.568177735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f0d1901-c2e1-4393-a1d7-09612371d391 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.568233214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f0d1901-c2e1-4393-a1d7-09612371d391 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.568388370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce945718b2f0aad588867e0d38fa2b102f76630b15ee4cd466a212820158803,PodSandboxId:2462cfa39e60f088328ba2fc24b4f3a9a11a37cd3b8e973db01e8cb4de0f6a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761595687413363061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lnhq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18552cbc-74a9-427a-a871-8c7e1da26a73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941655f4452bab1372a1fee266df0cacba7b9982bef6b4209abe982059ef12ef,PodSandboxId:ed56e297d4d49c0c7687e616746015ddc520bead10765fb856e47881f9f1bd27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761595679855143711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgsbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 001949f9-6828-4e36-a92b-0b8e41869ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6893ab2da7f1c04c78c14852259f2eed5b74d1cb4c7a807bc6c0a3c38166c5,PodSandboxId:20c31a9c7ed1cc2a96f362c39d649611239eae63dcc4548c1847c0edce7be4ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761595679822782104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
83cd2d-5f98-4bc3-9fbf-db72b7bf2774,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:674d3dbe2fb4a985089401e31835b75799b2c2492b1df23478e76562b9c7ea5e,PodSandboxId:488303cc96d8ef864aab23168197008125da55b771fa7f39a4f13900d8a2f3b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761595675344849652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 598e2981c
647f51022ba83ac0ad769f4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb629379a8e33dc92c7969f54ea7036df976da2050b817ea3ab0426799bb81e0,PodSandboxId:e3f627510e00d7b638da96518eb5fe1782fe12b35c678926efe9ee0c22dc2cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761595675331453472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 282891c1e62b6bdaa9de1ad897d69f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c9d690e076f0dd97ba01e9986285a22c8baab2ec1dcc97448bc896d29c7e0a,PodSandboxId:edec22b2ae5009e96871db9431c1e110f20c24213205682b2493a93bd8d873aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761595675269979357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e68
3fff75a68787009abdd74984a6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c132836f4404ff3fce09baa12790cf252aabb4485554afc491cd13c3ce0beb,PodSandboxId:49aa136c820aebe40b3644c845feca2a526a9ab5175d7b8f13e8b897f8f8f815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761595675245688657,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56bf2b37884286db6942c397342391e0,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f0d1901-c2e1-4393-a1d7-09612371d391 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.570150327Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b0bc2d63-0f08-40b8-84f0-e66a52d8529f name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.570332267Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2462cfa39e60f088328ba2fc24b4f3a9a11a37cd3b8e973db01e8cb4de0f6a21,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-lnhq6,Uid:18552cbc-74a9-427a-a871-8c7e1da26a73,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761595687168733152,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-lnhq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18552cbc-74a9-427a-a871-8c7e1da26a73,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T20:07:59.311224327Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed56e297d4d49c0c7687e616746015ddc520bead10765fb856e47881f9f1bd27,Metadata:&PodSandboxMetadata{Name:kube-proxy-zgsbw,Uid:001949f9-6828-4e36-a92b-0b8e41869ea1,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1761595679627846391,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zgsbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001949f9-6828-4e36-a92b-0b8e41869ea1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T20:07:59.311219872Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20c31a9c7ed1cc2a96f362c39d649611239eae63dcc4548c1847c0edce7be4ad,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1e83cd2d-5f98-4bc3-9fbf-db72b7bf2774,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761595679625376925,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e83cd2d-5f98-4bc3-9fbf-db72
b7bf2774,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-27T20:07:59.311223061Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:edec22b2ae5009e96871db9431c1e110f20c24213205682b2493a93bd8d873aa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-961693,Uid:4e683ff
f75a68787009abdd74984a6a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761595675052654484,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e683fff75a68787009abdd74984a6a4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4e683fff75a68787009abdd74984a6a4,kubernetes.io/config.seen: 2025-10-27T20:07:53.308505064Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:488303cc96d8ef864aab23168197008125da55b771fa7f39a4f13900d8a2f3b6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-961693,Uid:598e2981c647f51022ba83ac0ad769f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761595675048970965,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-961693,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 598e2981c647f51022ba83ac0ad769f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.215:8443,kubernetes.io/config.hash: 598e2981c647f51022ba83ac0ad769f4,kubernetes.io/config.seen: 2025-10-27T20:07:53.308506757Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:49aa136c820aebe40b3644c845feca2a526a9ab5175d7b8f13e8b897f8f8f815,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-961693,Uid:56bf2b37884286db6942c397342391e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761595675042828999,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56bf2b37884286db6942c397342391e0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.215:2379,kubernetes.io/config.hash: 56bf2b37884286d
b6942c397342391e0,kubernetes.io/config.seen: 2025-10-27T20:07:53.391782100Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e3f627510e00d7b638da96518eb5fe1782fe12b35c678926efe9ee0c22dc2cce,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-961693,Uid:282891c1e62b6bdaa9de1ad897d69f4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761595675041123308,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 282891c1e62b6bdaa9de1ad897d69f4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 282891c1e62b6bdaa9de1ad897d69f4a,kubernetes.io/config.seen: 2025-10-27T20:07:53.308501092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b0bc2d63-0f08-40b8-84f0-e66a52d8529f name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.571004206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a14b2209-c7fa-4c27-ad0a-f6414738f1d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.571070663Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a14b2209-c7fa-4c27-ad0a-f6414738f1d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.571221742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce945718b2f0aad588867e0d38fa2b102f76630b15ee4cd466a212820158803,PodSandboxId:2462cfa39e60f088328ba2fc24b4f3a9a11a37cd3b8e973db01e8cb4de0f6a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761595687413363061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lnhq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18552cbc-74a9-427a-a871-8c7e1da26a73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941655f4452bab1372a1fee266df0cacba7b9982bef6b4209abe982059ef12ef,PodSandboxId:ed56e297d4d49c0c7687e616746015ddc520bead10765fb856e47881f9f1bd27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761595679855143711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgsbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 001949f9-6828-4e36-a92b-0b8e41869ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6893ab2da7f1c04c78c14852259f2eed5b74d1cb4c7a807bc6c0a3c38166c5,PodSandboxId:20c31a9c7ed1cc2a96f362c39d649611239eae63dcc4548c1847c0edce7be4ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761595679822782104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
83cd2d-5f98-4bc3-9fbf-db72b7bf2774,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:674d3dbe2fb4a985089401e31835b75799b2c2492b1df23478e76562b9c7ea5e,PodSandboxId:488303cc96d8ef864aab23168197008125da55b771fa7f39a4f13900d8a2f3b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761595675344849652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 598e2981c
647f51022ba83ac0ad769f4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb629379a8e33dc92c7969f54ea7036df976da2050b817ea3ab0426799bb81e0,PodSandboxId:e3f627510e00d7b638da96518eb5fe1782fe12b35c678926efe9ee0c22dc2cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761595675331453472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 282891c1e62b6bdaa9de1ad897d69f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c9d690e076f0dd97ba01e9986285a22c8baab2ec1dcc97448bc896d29c7e0a,PodSandboxId:edec22b2ae5009e96871db9431c1e110f20c24213205682b2493a93bd8d873aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761595675269979357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e68
3fff75a68787009abdd74984a6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c132836f4404ff3fce09baa12790cf252aabb4485554afc491cd13c3ce0beb,PodSandboxId:49aa136c820aebe40b3644c845feca2a526a9ab5175d7b8f13e8b897f8f8f815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761595675245688657,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56bf2b37884286db6942c397342391e0,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a14b2209-c7fa-4c27-ad0a-f6414738f1d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.612214514Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd970c51-ea99-4e5d-b076-20fa60286e26 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.612327952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd970c51-ea99-4e5d-b076-20fa60286e26 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.613914813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d44c36d-05be-47d7-aefc-6ef9535f43d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.614422605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595694614399666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d44c36d-05be-47d7-aefc-6ef9535f43d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.615042469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3740dcf-e3fa-4c48-ace9-acf2df59e986 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.615090610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3740dcf-e3fa-4c48-ace9-acf2df59e986 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.615243739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce945718b2f0aad588867e0d38fa2b102f76630b15ee4cd466a212820158803,PodSandboxId:2462cfa39e60f088328ba2fc24b4f3a9a11a37cd3b8e973db01e8cb4de0f6a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761595687413363061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lnhq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18552cbc-74a9-427a-a871-8c7e1da26a73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941655f4452bab1372a1fee266df0cacba7b9982bef6b4209abe982059ef12ef,PodSandboxId:ed56e297d4d49c0c7687e616746015ddc520bead10765fb856e47881f9f1bd27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761595679855143711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgsbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 001949f9-6828-4e36-a92b-0b8e41869ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6893ab2da7f1c04c78c14852259f2eed5b74d1cb4c7a807bc6c0a3c38166c5,PodSandboxId:20c31a9c7ed1cc2a96f362c39d649611239eae63dcc4548c1847c0edce7be4ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761595679822782104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
83cd2d-5f98-4bc3-9fbf-db72b7bf2774,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:674d3dbe2fb4a985089401e31835b75799b2c2492b1df23478e76562b9c7ea5e,PodSandboxId:488303cc96d8ef864aab23168197008125da55b771fa7f39a4f13900d8a2f3b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761595675344849652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 598e2981c
647f51022ba83ac0ad769f4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb629379a8e33dc92c7969f54ea7036df976da2050b817ea3ab0426799bb81e0,PodSandboxId:e3f627510e00d7b638da96518eb5fe1782fe12b35c678926efe9ee0c22dc2cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761595675331453472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 282891c1e62b6bdaa9de1ad897d69f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c9d690e076f0dd97ba01e9986285a22c8baab2ec1dcc97448bc896d29c7e0a,PodSandboxId:edec22b2ae5009e96871db9431c1e110f20c24213205682b2493a93bd8d873aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761595675269979357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e68
3fff75a68787009abdd74984a6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c132836f4404ff3fce09baa12790cf252aabb4485554afc491cd13c3ce0beb,PodSandboxId:49aa136c820aebe40b3644c845feca2a526a9ab5175d7b8f13e8b897f8f8f815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761595675245688657,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56bf2b37884286db6942c397342391e0,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3740dcf-e3fa-4c48-ace9-acf2df59e986 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.653835148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6222ce2-6f82-4285-8618-87a48f869e54 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.653952146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6222ce2-6f82-4285-8618-87a48f869e54 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.655379563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b8680ab-abe5-4f5e-8084-e392c9881d18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.656066574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595694656042873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b8680ab-abe5-4f5e-8084-e392c9881d18 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.656802713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80a00f74-0277-4995-9aeb-f127d637d9c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.656904986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80a00f74-0277-4995-9aeb-f127d637d9c4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:08:14 test-preload-961693 crio[842]: time="2025-10-27 20:08:14.657079508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce945718b2f0aad588867e0d38fa2b102f76630b15ee4cd466a212820158803,PodSandboxId:2462cfa39e60f088328ba2fc24b4f3a9a11a37cd3b8e973db01e8cb4de0f6a21,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761595687413363061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lnhq6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18552cbc-74a9-427a-a871-8c7e1da26a73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:941655f4452bab1372a1fee266df0cacba7b9982bef6b4209abe982059ef12ef,PodSandboxId:ed56e297d4d49c0c7687e616746015ddc520bead10765fb856e47881f9f1bd27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761595679855143711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zgsbw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 001949f9-6828-4e36-a92b-0b8e41869ea1,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6893ab2da7f1c04c78c14852259f2eed5b74d1cb4c7a807bc6c0a3c38166c5,PodSandboxId:20c31a9c7ed1cc2a96f362c39d649611239eae63dcc4548c1847c0edce7be4ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761595679822782104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
83cd2d-5f98-4bc3-9fbf-db72b7bf2774,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:674d3dbe2fb4a985089401e31835b75799b2c2492b1df23478e76562b9c7ea5e,PodSandboxId:488303cc96d8ef864aab23168197008125da55b771fa7f39a4f13900d8a2f3b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761595675344849652,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 598e2981c
647f51022ba83ac0ad769f4,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb629379a8e33dc92c7969f54ea7036df976da2050b817ea3ab0426799bb81e0,PodSandboxId:e3f627510e00d7b638da96518eb5fe1782fe12b35c678926efe9ee0c22dc2cce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761595675331453472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 282891c1e62b6bdaa9de1ad897d69f4a,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47c9d690e076f0dd97ba01e9986285a22c8baab2ec1dcc97448bc896d29c7e0a,PodSandboxId:edec22b2ae5009e96871db9431c1e110f20c24213205682b2493a93bd8d873aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761595675269979357,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e68
3fff75a68787009abdd74984a6a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94c132836f4404ff3fce09baa12790cf252aabb4485554afc491cd13c3ce0beb,PodSandboxId:49aa136c820aebe40b3644c845feca2a526a9ab5175d7b8f13e8b897f8f8f815,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761595675245688657,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56bf2b37884286db6942c397342391e0,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80a00f74-0277-4995-9aeb-f127d637d9c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bce945718b2f0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   2462cfa39e60f       coredns-668d6bf9bc-lnhq6
	941655f4452ba       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   ed56e297d4d49       kube-proxy-zgsbw
	4e6893ab2da7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   20c31a9c7ed1c       storage-provisioner
	674d3dbe2fb4a       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   488303cc96d8e       kube-apiserver-test-preload-961693
	cb629379a8e33       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   e3f627510e00d       kube-controller-manager-test-preload-961693
	47c9d690e076f       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   edec22b2ae500       kube-scheduler-test-preload-961693
	94c132836f440       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   49aa136c820ae       etcd-test-preload-961693
	
	
	==> coredns [bce945718b2f0aad588867e0d38fa2b102f76630b15ee4cd466a212820158803] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36999 - 3116 "HINFO IN 5025919203990329014.606418183287352354. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021472777s
	
	
	==> describe nodes <==
	Name:               test-preload-961693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-961693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=test-preload-961693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_06_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:06:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-961693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:08:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:08:09 +0000   Mon, 27 Oct 2025 20:06:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:08:09 +0000   Mon, 27 Oct 2025 20:06:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:08:09 +0000   Mon, 27 Oct 2025 20:06:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:08:09 +0000   Mon, 27 Oct 2025 20:08:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    test-preload-961693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 af1f9cb13df14942add61a7198b6a02f
	  System UUID:                af1f9cb1-3df1-4942-add6-1a7198b6a02f
	  Boot ID:                    d6a1817e-1794-4dc0-a9e4-b28facb916ef
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-lnhq6                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     101s
	  kube-system                 etcd-test-preload-961693                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-961693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-961693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-zgsbw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-test-preload-961693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 99s                kube-proxy       
	  Normal   NodeHasSufficientPID     105s               kubelet          Node test-preload-961693 status is now: NodeHasSufficientPID
	  Normal   Starting                 105s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  105s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  105s               kubelet          Node test-preload-961693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s               kubelet          Node test-preload-961693 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                104s               kubelet          Node test-preload-961693 status is now: NodeReady
	  Normal   RegisteredNode           102s               node-controller  Node test-preload-961693 event: Registered Node test-preload-961693 in Controller
	  Normal   CIDRAssignmentFailed     101s               cidrAllocator    Node test-preload-961693 status is now: CIDRAssignmentFailed
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-961693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-961693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-961693 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-961693 has been rebooted, boot id: d6a1817e-1794-4dc0-a9e4-b28facb916ef
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-961693 event: Registered Node test-preload-961693 in Controller
	
	
	==> dmesg <==
	[Oct27 20:07] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000066] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006452] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.063694] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087217] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.104291] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.515948] kauditd_printk_skb: 177 callbacks suppressed
	[Oct27 20:08] kauditd_printk_skb: 128 callbacks suppressed
	
	
	==> etcd [94c132836f4404ff3fce09baa12790cf252aabb4485554afc491cd13c3ce0beb] <==
	{"level":"info","ts":"2025-10-27T20:07:55.673326Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4cd5d1376c5e8c88","local-member-id":"ce9e8f286885b37e","added-peer-id":"ce9e8f286885b37e","added-peer-peer-urls":["https://192.168.39.215:2380"]}
	{"level":"info","ts":"2025-10-27T20:07:55.673429Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4cd5d1376c5e8c88","local-member-id":"ce9e8f286885b37e","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T20:07:55.673451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T20:07:55.675681Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-27T20:07:55.691696Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T20:07:55.699084Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ce9e8f286885b37e","initial-advertise-peer-urls":["https://192.168.39.215:2380"],"listen-peer-urls":["https://192.168.39.215:2380"],"advertise-client-urls":["https://192.168.39.215:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.215:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T20:07:55.699638Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T20:07:55.694614Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2025-10-27T20:07:55.702699Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.215:2380"}
	{"level":"info","ts":"2025-10-27T20:07:57.312319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T20:07:57.312355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T20:07:57.312390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgPreVoteResp from ce9e8f286885b37e at term 2"}
	{"level":"info","ts":"2025-10-27T20:07:57.312413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T20:07:57.312419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e received MsgVoteResp from ce9e8f286885b37e at term 3"}
	{"level":"info","ts":"2025-10-27T20:07:57.312441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce9e8f286885b37e became leader at term 3"}
	{"level":"info","ts":"2025-10-27T20:07:57.312454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce9e8f286885b37e elected leader ce9e8f286885b37e at term 3"}
	{"level":"info","ts":"2025-10-27T20:07:57.315266Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ce9e8f286885b37e","local-member-attributes":"{Name:test-preload-961693 ClientURLs:[https://192.168.39.215:2379]}","request-path":"/0/members/ce9e8f286885b37e/attributes","cluster-id":"4cd5d1376c5e8c88","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T20:07:57.315403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T20:07:57.315524Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T20:07:57.316345Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-27T20:07:57.316464Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T20:07:57.316487Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T20:07:57.316827Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-27T20:07:57.316953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T20:07:57.317397Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.215:2379"}
	
	
	==> kernel <==
	 20:08:14 up 0 min,  0 users,  load average: 1.03, 0.28, 0.09
	Linux test-preload-961693 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [674d3dbe2fb4a985089401e31835b75799b2c2492b1df23478e76562b9c7ea5e] <==
	I1027 20:07:58.574509       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:07:58.574626       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:07:58.574634       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:07:58.574663       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:07:58.587108       1 shared_informer.go:320] Caches are synced for configmaps
	I1027 20:07:58.587243       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:07:58.594045       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1027 20:07:58.603418       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1027 20:07:58.603445       1 policy_source.go:240] refreshing policies
	I1027 20:07:58.616443       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:07:58.644858       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:07:58.649065       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1027 20:07:58.650931       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:07:58.651025       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:07:58.652311       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:07:58.671633       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:07:59.428682       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1027 20:07:59.467132       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:08:00.180390       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1027 20:08:00.234948       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1027 20:08:00.287941       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:08:00.296947       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:08:01.825654       1 controller.go:615] quota admission added evaluator for: endpoints
	I1027 20:08:01.872309       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:08:02.122017       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [cb629379a8e33dc92c7969f54ea7036df976da2050b817ea3ab0426799bb81e0] <==
	I1027 20:08:01.768923       1 shared_informer.go:320] Caches are synced for deployment
	I1027 20:08:01.769718       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1027 20:08:01.769834       1 shared_informer.go:320] Caches are synced for cronjob
	I1027 20:08:01.769884       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1027 20:08:01.771110       1 shared_informer.go:320] Caches are synced for garbage collector
	I1027 20:08:01.771141       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:08:01.771147       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:08:01.773941       1 shared_informer.go:320] Caches are synced for resource quota
	I1027 20:08:01.811169       1 shared_informer.go:320] Caches are synced for garbage collector
	I1027 20:08:01.813338       1 shared_informer.go:320] Caches are synced for taint
	I1027 20:08:01.813631       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:08:01.813727       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-961693"
	I1027 20:08:01.813770       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 20:08:01.818766       1 shared_informer.go:320] Caches are synced for ephemeral
	I1027 20:08:01.818833       1 shared_informer.go:320] Caches are synced for job
	I1027 20:08:01.818930       1 shared_informer.go:320] Caches are synced for GC
	I1027 20:08:01.840148       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-961693"
	I1027 20:08:02.129093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="362.000637ms"
	I1027 20:08:02.129375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="55.328µs"
	I1027 20:08:07.564911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="262.482µs"
	I1027 20:08:07.598818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.949777ms"
	I1027 20:08:07.599112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.502µs"
	I1027 20:08:09.112666       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-961693"
	I1027 20:08:09.126114       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-961693"
	I1027 20:08:11.815794       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [941655f4452bab1372a1fee266df0cacba7b9982bef6b4209abe982059ef12ef] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1027 20:08:00.148170       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1027 20:08:00.170919       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.215"]
	E1027 20:08:00.171019       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:08:00.250393       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1027 20:08:00.250492       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 20:08:00.250662       1 server_linux.go:170] "Using iptables Proxier"
	I1027 20:08:00.256271       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:08:00.257403       1 server.go:497] "Version info" version="v1.32.0"
	I1027 20:08:00.257417       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:08:00.262048       1 config.go:199] "Starting service config controller"
	I1027 20:08:00.262159       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1027 20:08:00.262200       1 config.go:105] "Starting endpoint slice config controller"
	I1027 20:08:00.262206       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1027 20:08:00.267183       1 config.go:329] "Starting node config controller"
	I1027 20:08:00.267411       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1027 20:08:00.362233       1 shared_informer.go:320] Caches are synced for service config
	I1027 20:08:00.362303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1027 20:08:00.369486       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [47c9d690e076f0dd97ba01e9986285a22c8baab2ec1dcc97448bc896d29c7e0a] <==
	I1027 20:07:56.622235       1 serving.go:386] Generated self-signed cert in-memory
	W1027 20:07:58.497743       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 20:07:58.497871       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 20:07:58.497938       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 20:07:58.497967       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 20:07:58.567494       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1027 20:07:58.569606       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:07:58.573151       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1027 20:07:58.573153       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:07:58.576060       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 20:07:58.573183       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:07:58.676774       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 20:07:58 test-preload-961693 kubelet[1164]: I1027 20:07:58.713402    1164 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 20:07:58 test-preload-961693 kubelet[1164]: I1027 20:07:58.716385    1164 setters.go:602] "Node became not ready" node="test-preload-961693" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-27T20:07:58Z","lastTransitionTime":"2025-10-27T20:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 27 20:07:58 test-preload-961693 kubelet[1164]: E1027 20:07:58.732707    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-961693\" already exists" pod="kube-system/etcd-test-preload-961693"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: I1027 20:07:59.307316    1164 apiserver.go:52] "Watching apiserver"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: E1027 20:07:59.313971    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lnhq6" podUID="18552cbc-74a9-427a-a871-8c7e1da26a73"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: I1027 20:07:59.334247    1164 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: I1027 20:07:59.416892    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e83cd2d-5f98-4bc3-9fbf-db72b7bf2774-tmp\") pod \"storage-provisioner\" (UID: \"1e83cd2d-5f98-4bc3-9fbf-db72b7bf2774\") " pod="kube-system/storage-provisioner"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: I1027 20:07:59.416944    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/001949f9-6828-4e36-a92b-0b8e41869ea1-xtables-lock\") pod \"kube-proxy-zgsbw\" (UID: \"001949f9-6828-4e36-a92b-0b8e41869ea1\") " pod="kube-system/kube-proxy-zgsbw"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: I1027 20:07:59.416963    1164 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/001949f9-6828-4e36-a92b-0b8e41869ea1-lib-modules\") pod \"kube-proxy-zgsbw\" (UID: \"001949f9-6828-4e36-a92b-0b8e41869ea1\") " pod="kube-system/kube-proxy-zgsbw"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: E1027 20:07:59.417062    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: E1027 20:07:59.417131    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume podName:18552cbc-74a9-427a-a871-8c7e1da26a73 nodeName:}" failed. No retries permitted until 2025-10-27 20:07:59.917107966 +0000 UTC m=+6.707185043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume") pod "coredns-668d6bf9bc-lnhq6" (UID: "18552cbc-74a9-427a-a871-8c7e1da26a73") : object "kube-system"/"coredns" not registered
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: I1027 20:07:59.488865    1164 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-961693"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: E1027 20:07:59.501428    1164 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-961693\" already exists" pod="kube-system/kube-apiserver-test-preload-961693"
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: E1027 20:07:59.921853    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 20:07:59 test-preload-961693 kubelet[1164]: E1027 20:07:59.921914    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume podName:18552cbc-74a9-427a-a871-8c7e1da26a73 nodeName:}" failed. No retries permitted until 2025-10-27 20:08:00.921901759 +0000 UTC m=+7.711978837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume") pod "coredns-668d6bf9bc-lnhq6" (UID: "18552cbc-74a9-427a-a871-8c7e1da26a73") : object "kube-system"/"coredns" not registered
	Oct 27 20:08:00 test-preload-961693 kubelet[1164]: E1027 20:08:00.930634    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 20:08:00 test-preload-961693 kubelet[1164]: E1027 20:08:00.931281    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume podName:18552cbc-74a9-427a-a871-8c7e1da26a73 nodeName:}" failed. No retries permitted until 2025-10-27 20:08:02.931249236 +0000 UTC m=+9.721326314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume") pod "coredns-668d6bf9bc-lnhq6" (UID: "18552cbc-74a9-427a-a871-8c7e1da26a73") : object "kube-system"/"coredns" not registered
	Oct 27 20:08:01 test-preload-961693 kubelet[1164]: E1027 20:08:01.352364    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lnhq6" podUID="18552cbc-74a9-427a-a871-8c7e1da26a73"
	Oct 27 20:08:02 test-preload-961693 kubelet[1164]: E1027 20:08:02.944949    1164 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 27 20:08:02 test-preload-961693 kubelet[1164]: E1027 20:08:02.945061    1164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume podName:18552cbc-74a9-427a-a871-8c7e1da26a73 nodeName:}" failed. No retries permitted until 2025-10-27 20:08:06.945046961 +0000 UTC m=+13.735124040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/18552cbc-74a9-427a-a871-8c7e1da26a73-config-volume") pod "coredns-668d6bf9bc-lnhq6" (UID: "18552cbc-74a9-427a-a871-8c7e1da26a73") : object "kube-system"/"coredns" not registered
	Oct 27 20:08:03 test-preload-961693 kubelet[1164]: E1027 20:08:03.354922    1164 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lnhq6" podUID="18552cbc-74a9-427a-a871-8c7e1da26a73"
	Oct 27 20:08:03 test-preload-961693 kubelet[1164]: E1027 20:08:03.402756    1164 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595683402364444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 27 20:08:03 test-preload-961693 kubelet[1164]: E1027 20:08:03.402781    1164 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595683402364444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 27 20:08:13 test-preload-961693 kubelet[1164]: E1027 20:08:13.404765    1164 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595693403903794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 27 20:08:13 test-preload-961693 kubelet[1164]: E1027 20:08:13.404789    1164 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761595693403903794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4e6893ab2da7f1c04c78c14852259f2eed5b74d1cb4c7a807bc6c0a3c38166c5] <==
	I1027 20:07:59.957769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-961693 -n test-preload-961693
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-961693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-961693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-961693
--- FAIL: TestPreload (161.93s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (70.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145997 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-145997 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.013206561s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-145997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-145997" primary control-plane node in "pause-145997" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-145997" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:14:11.382372   96119 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:14:11.382542   96119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:14:11.382554   96119 out.go:374] Setting ErrFile to fd 2...
	I1027 20:14:11.382560   96119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:14:11.382775   96119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:14:11.383218   96119 out.go:368] Setting JSON to false
	I1027 20:14:11.384119   96119 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10601,"bootTime":1761585450,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 20:14:11.384209   96119 start.go:141] virtualization: kvm guest
	I1027 20:14:11.386253   96119 out.go:179] * [pause-145997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 20:14:11.387417   96119 notify.go:220] Checking for updates...
	I1027 20:14:11.387442   96119 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:14:11.388627   96119 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:14:11.389837   96119 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:14:11.391062   96119 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:14:11.392203   96119 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 20:14:11.393390   96119 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:14:11.395006   96119 config.go:182] Loaded profile config "pause-145997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:14:11.395439   96119 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:14:11.427585   96119 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 20:14:11.428575   96119 start.go:305] selected driver: kvm2
	I1027 20:14:11.428590   96119 start.go:925] validating driver "kvm2" against &{Name:pause-145997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.115 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:14:11.428715   96119 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:14:11.429668   96119 cni.go:84] Creating CNI manager for ""
	I1027 20:14:11.429729   96119 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:14:11.429775   96119 start.go:349] cluster config:
	{Name:pause-145997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.115 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:14:11.429909   96119 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:14:11.431451   96119 out.go:179] * Starting "pause-145997" primary control-plane node in "pause-145997" cluster
	I1027 20:14:11.432399   96119 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:14:11.432433   96119 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 20:14:11.432447   96119 cache.go:58] Caching tarball of preloaded images
	I1027 20:14:11.432514   96119 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 20:14:11.432524   96119 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:14:11.432624   96119 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/config.json ...
	I1027 20:14:11.432816   96119 start.go:360] acquireMachinesLock for pause-145997: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 20:14:38.874501   96119 start.go:364] duration metric: took 27.441627935s to acquireMachinesLock for "pause-145997"
	I1027 20:14:38.874558   96119 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:14:38.874567   96119 fix.go:54] fixHost starting: 
	I1027 20:14:38.877302   96119 fix.go:112] recreateIfNeeded on pause-145997: state=Running err=<nil>
	W1027 20:14:38.877339   96119 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 20:14:38.879656   96119 out.go:252] * Updating the running kvm2 "pause-145997" VM ...
	I1027 20:14:38.879689   96119 machine.go:93] provisionDockerMachine start ...
	I1027 20:14:38.883428   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:38.884129   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:38.884163   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:38.884702   96119 main.go:141] libmachine: Using SSH client type: native
	I1027 20:14:38.884970   96119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.115 22 <nil> <nil>}
	I1027 20:14:38.884985   96119 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:14:38.996872   96119 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-145997
	
	I1027 20:14:38.996898   96119 buildroot.go:166] provisioning hostname "pause-145997"
	I1027 20:14:39.000451   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.001015   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:39.001060   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.001314   96119 main.go:141] libmachine: Using SSH client type: native
	I1027 20:14:39.001603   96119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.115 22 <nil> <nil>}
	I1027 20:14:39.001619   96119 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-145997 && echo "pause-145997" | sudo tee /etc/hostname
	I1027 20:14:39.126433   96119 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-145997
	
	I1027 20:14:39.129316   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.129726   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:39.129768   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.129984   96119 main.go:141] libmachine: Using SSH client type: native
	I1027 20:14:39.130211   96119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.115 22 <nil> <nil>}
	I1027 20:14:39.130227   96119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-145997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-145997/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-145997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:14:39.236589   96119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:14:39.236623   96119 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 20:14:39.236677   96119 buildroot.go:174] setting up certificates
	I1027 20:14:39.236688   96119 provision.go:84] configureAuth start
	I1027 20:14:39.240282   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.240819   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:39.240852   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.243415   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.243744   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:39.243765   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.243950   96119 provision.go:143] copyHostCerts
	I1027 20:14:39.244021   96119 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem, removing ...
	I1027 20:14:39.244049   96119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem
	I1027 20:14:39.244122   96119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 20:14:39.244245   96119 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem, removing ...
	I1027 20:14:39.244256   96119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem
	I1027 20:14:39.244291   96119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 20:14:39.244377   96119 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem, removing ...
	I1027 20:14:39.244387   96119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem
	I1027 20:14:39.244417   96119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 20:14:39.244494   96119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.pause-145997 san=[127.0.0.1 192.168.72.115 localhost minikube pause-145997]
	I1027 20:14:39.382844   96119 provision.go:177] copyRemoteCerts
	I1027 20:14:39.382922   96119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:14:39.385591   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.385967   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:39.385989   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.386137   96119 sshutil.go:53] new ssh client: &{IP:192.168.72.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/pause-145997/id_rsa Username:docker}
	I1027 20:14:39.474314   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:14:39.508546   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 20:14:39.542159   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:14:39.573411   96119 provision.go:87] duration metric: took 336.705472ms to configureAuth
	I1027 20:14:39.573448   96119 buildroot.go:189] setting minikube options for container-runtime
	I1027 20:14:39.573725   96119 config.go:182] Loaded profile config "pause-145997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:14:39.577132   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.577589   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:39.577653   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:39.577911   96119 main.go:141] libmachine: Using SSH client type: native
	I1027 20:14:39.578150   96119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.115 22 <nil> <nil>}
	I1027 20:14:39.578167   96119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:14:45.135008   96119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:14:45.135049   96119 machine.go:96] duration metric: took 6.255333875s to provisionDockerMachine
	I1027 20:14:45.135065   96119 start.go:293] postStartSetup for "pause-145997" (driver="kvm2")
	I1027 20:14:45.135079   96119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:14:45.135152   96119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:14:45.138273   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.138694   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:45.138728   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.138891   96119 sshutil.go:53] new ssh client: &{IP:192.168.72.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/pause-145997/id_rsa Username:docker}
	I1027 20:14:45.224802   96119 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:14:45.230351   96119 info.go:137] Remote host: Buildroot 2025.02
	I1027 20:14:45.230375   96119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 20:14:45.230460   96119 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 20:14:45.230578   96119 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem -> 627052.pem in /etc/ssl/certs
	I1027 20:14:45.230686   96119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:14:45.243513   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:14:45.279452   96119 start.go:296] duration metric: took 144.368175ms for postStartSetup
	I1027 20:14:45.279504   96119 fix.go:56] duration metric: took 6.404936677s for fixHost
	I1027 20:14:45.282490   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.282900   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:45.282922   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.283139   96119 main.go:141] libmachine: Using SSH client type: native
	I1027 20:14:45.283406   96119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.115 22 <nil> <nil>}
	I1027 20:14:45.283421   96119 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 20:14:45.391297   96119 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761596085.383400536
	
	I1027 20:14:45.391323   96119 fix.go:216] guest clock: 1761596085.383400536
	I1027 20:14:45.391346   96119 fix.go:229] Guest: 2025-10-27 20:14:45.383400536 +0000 UTC Remote: 2025-10-27 20:14:45.279511515 +0000 UTC m=+33.947833321 (delta=103.889021ms)
	I1027 20:14:45.391388   96119 fix.go:200] guest clock delta is within tolerance: 103.889021ms
	I1027 20:14:45.391394   96119 start.go:83] releasing machines lock for "pause-145997", held for 6.516861908s
	I1027 20:14:45.394524   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.395009   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:45.395054   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.395671   96119 ssh_runner.go:195] Run: cat /version.json
	I1027 20:14:45.395744   96119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:14:45.399009   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.399210   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.399508   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:45.399541   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.399646   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:45.399682   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:45.399714   96119 sshutil.go:53] new ssh client: &{IP:192.168.72.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/pause-145997/id_rsa Username:docker}
	I1027 20:14:45.399937   96119 sshutil.go:53] new ssh client: &{IP:192.168.72.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/pause-145997/id_rsa Username:docker}
	I1027 20:14:45.483169   96119 ssh_runner.go:195] Run: systemctl --version
	I1027 20:14:45.511188   96119 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:14:45.668904   96119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:14:45.679603   96119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:14:45.679679   96119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:14:45.694860   96119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 20:14:45.694896   96119 start.go:495] detecting cgroup driver to use...
	I1027 20:14:45.694983   96119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:14:45.724052   96119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:14:45.744866   96119 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:14:45.744951   96119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:14:45.775763   96119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:14:45.793202   96119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:14:45.987842   96119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:14:46.163867   96119 docker.go:234] disabling docker service ...
	I1027 20:14:46.163943   96119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:14:46.195996   96119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:14:46.212847   96119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:14:46.432873   96119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:14:46.625261   96119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:14:46.649547   96119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:14:46.679325   96119 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:14:46.679388   96119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.693495   96119 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:14:46.693571   96119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.708209   96119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.723259   96119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.737590   96119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:14:46.754001   96119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.772136   96119 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.786246   96119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:14:46.804060   96119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:14:46.817502   96119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:14:46.831624   96119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:14:47.069890   96119 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:14:48.444339   96119 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.374405682s)
	I1027 20:14:48.444381   96119 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:14:48.444464   96119 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:14:48.451381   96119 start.go:563] Will wait 60s for crictl version
	I1027 20:14:48.451476   96119 ssh_runner.go:195] Run: which crictl
	I1027 20:14:48.456753   96119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 20:14:48.499095   96119 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 20:14:48.499188   96119 ssh_runner.go:195] Run: crio --version
	I1027 20:14:48.534770   96119 ssh_runner.go:195] Run: crio --version
	I1027 20:14:48.571152   96119 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 20:14:48.575879   96119 main.go:141] libmachine: domain pause-145997 has defined MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:48.576430   96119 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:25:11", ip: ""} in network mk-pause-145997: {Iface:virbr4 ExpiryTime:2025-10-27 21:13:31 +0000 UTC Type:0 Mac:52:54:00:b7:25:11 Iaid: IPaddr:192.168.72.115 Prefix:24 Hostname:pause-145997 Clientid:01:52:54:00:b7:25:11}
	I1027 20:14:48.576498   96119 main.go:141] libmachine: domain pause-145997 has defined IP address 192.168.72.115 and MAC address 52:54:00:b7:25:11 in network mk-pause-145997
	I1027 20:14:48.576706   96119 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1027 20:14:48.582327   96119 kubeadm.go:883] updating cluster {Name:pause-145997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.115 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:14:48.582498   96119 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:14:48.582567   96119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:14:48.648828   96119 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:14:48.648852   96119 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:14:48.648905   96119 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:14:48.691004   96119 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:14:48.691043   96119 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:14:48.691054   96119 kubeadm.go:934] updating node { 192.168.72.115 8443 v1.34.1 crio true true} ...
	I1027 20:14:48.691174   96119 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-145997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-145997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 20:14:48.691272   96119 ssh_runner.go:195] Run: crio config
	I1027 20:14:48.750384   96119 cni.go:84] Creating CNI manager for ""
	I1027 20:14:48.750417   96119 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:14:48.750452   96119 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:14:48.750486   96119 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.115 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-145997 NodeName:pause-145997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:14:48.750694   96119 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-145997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.115"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.115"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 20:14:48.750801   96119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:14:48.765069   96119 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:14:48.765137   96119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:14:48.778497   96119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1027 20:14:48.805084   96119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:14:48.835679   96119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 20:14:48.861321   96119 ssh_runner.go:195] Run: grep 192.168.72.115	control-plane.minikube.internal$ /etc/hosts
	I1027 20:14:48.866501   96119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:14:49.051821   96119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:14:49.144176   96119 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997 for IP: 192.168.72.115
	I1027 20:14:49.144201   96119 certs.go:195] generating shared ca certs ...
	I1027 20:14:49.144217   96119 certs.go:227] acquiring lock for ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:14:49.144408   96119 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 20:14:49.144484   96119 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 20:14:49.144503   96119 certs.go:257] generating profile certs ...
	I1027 20:14:49.144623   96119 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/client.key
	I1027 20:14:49.144696   96119 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/apiserver.key.caef1847
	I1027 20:14:49.144751   96119 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/proxy-client.key
	I1027 20:14:49.144903   96119 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem (1338 bytes)
	W1027 20:14:49.144935   96119 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705_empty.pem, impossibly tiny 0 bytes
	I1027 20:14:49.144942   96119 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 20:14:49.144965   96119 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:14:49.144987   96119 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:14:49.145010   96119 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 20:14:49.145077   96119 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:14:49.145814   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:14:49.236169   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:14:49.306205   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:14:49.345178   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 20:14:49.384128   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 20:14:49.430633   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 20:14:49.468412   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:14:49.518481   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 20:14:49.556706   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /usr/share/ca-certificates/627052.pem (1708 bytes)
	I1027 20:14:49.596582   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:14:49.651995   96119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem --> /usr/share/ca-certificates/62705.pem (1338 bytes)
	I1027 20:14:49.762796   96119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:14:49.818443   96119 ssh_runner.go:195] Run: openssl version
	I1027 20:14:49.832983   96119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/627052.pem && ln -fs /usr/share/ca-certificates/627052.pem /etc/ssl/certs/627052.pem"
	I1027 20:14:49.860262   96119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/627052.pem
	I1027 20:14:49.873826   96119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:09 /usr/share/ca-certificates/627052.pem
	I1027 20:14:49.873901   96119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/627052.pem
	I1027 20:14:49.892472   96119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/627052.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:14:49.918501   96119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:14:49.950451   96119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:14:49.958026   96119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:14:49.958114   96119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:14:49.972476   96119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:14:49.992500   96119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/62705.pem && ln -fs /usr/share/ca-certificates/62705.pem /etc/ssl/certs/62705.pem"
	I1027 20:14:50.017045   96119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/62705.pem
	I1027 20:14:50.030574   96119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:09 /usr/share/ca-certificates/62705.pem
	I1027 20:14:50.030658   96119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/62705.pem
	I1027 20:14:50.047099   96119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/62705.pem /etc/ssl/certs/51391683.0"
	I1027 20:14:50.077430   96119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:14:50.084320   96119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 20:14:50.094684   96119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 20:14:50.106343   96119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 20:14:50.121407   96119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 20:14:50.136746   96119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 20:14:50.145989   96119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1027 20:14:50.157884   96119 kubeadm.go:400] StartCluster: {Name:pause-145997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-145997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.115 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:14:50.158073   96119 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:14:50.158170   96119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:14:50.269778   96119 cri.go:89] found id: "921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca"
	I1027 20:14:50.269808   96119 cri.go:89] found id: "a139373de7fd05501114a4995b989c4548d3ce9876179050be3b9f77ea24633a"
	I1027 20:14:50.269815   96119 cri.go:89] found id: "4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a"
	I1027 20:14:50.269821   96119 cri.go:89] found id: "59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf"
	I1027 20:14:50.269827   96119 cri.go:89] found id: "f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c"
	I1027 20:14:50.269831   96119 cri.go:89] found id: "8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1"
	I1027 20:14:50.269836   96119 cri.go:89] found id: "d0ac825360465f7a1045ebf617c3c8ad32c796ef84449e9787989c4349ed25ea"
	I1027 20:14:50.269840   96119 cri.go:89] found id: "680f628baf8b8c3d84e0482f891bca2fba687bcb9ad81f238fd665dbc1cfcd2b"
	I1027 20:14:50.269843   96119 cri.go:89] found id: "24d83c3a64c144128595f5cf1ddaecaa11a83ca971f99ff8638236cad0b3b519"
	I1027 20:14:50.269855   96119 cri.go:89] found id: ""
	I1027 20:14:50.269918   96119 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-145997 -n pause-145997
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-145997 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-145997 logs -n 25: (2.052925437s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-764820 sudo cat /etc/docker/daemon.json                                                                                      │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo docker system info                                                                                               │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl status cri-docker --all --full --no-pager                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl cat cri-docker --no-pager                                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cri-dockerd --version                                                                                            │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl status containerd --all --full --no-pager                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl cat containerd --no-pager                                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /lib/systemd/system/containerd.service                                                                       │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /etc/containerd/config.toml                                                                                  │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo containerd config dump                                                                                           │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl status crio --all --full --no-pager                                                                    │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl cat crio --no-pager                                                                                    │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo crio config                                                                                                      │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ delete  │ -p cilium-764820                                                                                                                       │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:13 UTC │
	│ start   │ -p guest-291039 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                │ guest-291039              │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:14 UTC │
	│ ssh     │ -p NoKubernetes-421237 sudo systemctl is-active --quiet service kubelet                                                                │ NoKubernetes-421237       │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ delete  │ -p NoKubernetes-421237                                                                                                                 │ NoKubernetes-421237       │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:13 UTC │
	│ start   │ -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio │ kubernetes-upgrade-176362 │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:15 UTC │
	│ start   │ -p pause-145997 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                         │ pause-145997              │ jenkins │ v1.37.0 │ 27 Oct 25 20:14 UTC │ 27 Oct 25 20:15 UTC │
	│ start   │ -p stopped-upgrade-246578 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                     │ stopped-upgrade-246578    │ jenkins │ v1.32.0 │ 27 Oct 25 20:14 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-176362                                                                                                           │ kubernetes-upgrade-176362 │ jenkins │ v1.37.0 │ 27 Oct 25 20:15 UTC │ 27 Oct 25 20:15 UTC │
	│ start   │ -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio │ kubernetes-upgrade-176362 │ jenkins │ v1.37.0 │ 27 Oct 25 20:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:15:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:15:04.526743   96708 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:15:04.527012   96708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:15:04.527022   96708 out.go:374] Setting ErrFile to fd 2...
	I1027 20:15:04.527026   96708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:15:04.527242   96708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:15:04.527698   96708 out.go:368] Setting JSON to false
	I1027 20:15:04.528621   96708 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10655,"bootTime":1761585450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 20:15:04.528742   96708 start.go:141] virtualization: kvm guest
	I1027 20:15:04.531441   96708 out.go:179] * [kubernetes-upgrade-176362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 20:15:04.532935   96708 notify.go:220] Checking for updates...
	I1027 20:15:04.533003   96708 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:15:04.534425   96708 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:15:04.535939   96708 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:15:04.537373   96708 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:15:04.539121   96708 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 20:15:04.540644   96708 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:15:01.342342   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:01.342974   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | no network interface addresses found for domain stopped-upgrade-246578 (source=lease)
	I1027 20:15:01.342998   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | trying to list again with source=arp
	I1027 20:15:01.343439   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | unable to find current IP address of domain stopped-upgrade-246578 in network mk-stopped-upgrade-246578 (interfaces detected: [])
	I1027 20:15:01.343464   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | I1027 20:15:01.343400   96461 retry.go:31] will retry after 4.465038765s: waiting for domain to come up
	I1027 20:15:04.542531   96708 config.go:182] Loaded profile config "kubernetes-upgrade-176362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 20:15:04.542927   96708 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:15:04.581129   96708 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 20:15:04.582537   96708 start.go:305] selected driver: kvm2
	I1027 20:15:04.582553   96708 start.go:925] validating driver "kvm2" against &{Name:kubernetes-upgrade-176362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-176362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.42 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:15:04.582646   96708 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:15:04.583603   96708 cni.go:84] Creating CNI manager for ""
	I1027 20:15:04.583725   96708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:15:04.583772   96708 start.go:349] cluster config:
	{Name:kubernetes-upgrade-176362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-176362 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:15:04.583860   96708 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:15:04.585435   96708 out.go:179] * Starting "kubernetes-upgrade-176362" primary control-plane node in "kubernetes-upgrade-176362" cluster
	I1027 20:15:02.037741   96119 addons.go:514] duration metric: took 3.232021ms for enable addons: enabled=[]
	I1027 20:15:02.037900   96119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:15:02.260341   96119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:15:02.286364   96119 node_ready.go:35] waiting up to 6m0s for node "pause-145997" to be "Ready" ...
	I1027 20:15:02.289911   96119 node_ready.go:49] node "pause-145997" is "Ready"
	I1027 20:15:02.289957   96119 node_ready.go:38] duration metric: took 3.541549ms for node "pause-145997" to be "Ready" ...
	I1027 20:15:02.289977   96119 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:15:02.290053   96119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:15:02.318255   96119 api_server.go:72] duration metric: took 283.786236ms to wait for apiserver process to appear ...
	I1027 20:15:02.318290   96119 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:15:02.318319   96119 api_server.go:253] Checking apiserver healthz at https://192.168.72.115:8443/healthz ...
	I1027 20:15:02.327745   96119 api_server.go:279] https://192.168.72.115:8443/healthz returned 200:
	ok
	I1027 20:15:02.329684   96119 api_server.go:141] control plane version: v1.34.1
	I1027 20:15:02.329708   96119 api_server.go:131] duration metric: took 11.408278ms to wait for apiserver health ...
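The wait above is a plain HTTPS poll: the tester repeatedly hits the apiserver's /healthz endpoint until it answers 200. A minimal Go sketch of that kind of probe follows; the URL, timeout, and the insecure TLS client are illustrative assumptions, not minikube's actual implementation.

// healthzpoll.go: illustrative sketch of polling an apiserver /healthz
// endpoint until it returns HTTP 200, in the spirit of the wait logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test apiserver presents a self-signed certificate, so this
		// illustrative probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200, the apiserver is up
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not return 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.115:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println("not healthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}

Skipping certificate verification is tolerable only because the probe targets a throwaway test VM; a real client would trust the cluster CA instead.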
	I1027 20:15:02.329720   96119 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:15:02.333940   96119 system_pods.go:59] 6 kube-system pods found
	I1027 20:15:02.333990   96119 system_pods.go:61] "coredns-66bc5c9577-4qs4m" [92c6d26c-1ff4-4a98-b0f6-963244a8a802] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:15:02.334001   96119 system_pods.go:61] "etcd-pause-145997" [08d8f65d-3056-48ee-9d16-a448d38ba1e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:15:02.334014   96119 system_pods.go:61] "kube-apiserver-pause-145997" [da350639-ea13-402e-8856-3e304c9bc93a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:15:02.334029   96119 system_pods.go:61] "kube-controller-manager-pause-145997" [80f89714-5130-4df9-b9b0-bd9cc6bd5b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:15:02.334054   96119 system_pods.go:61] "kube-proxy-2vzps" [01869f53-a897-4a1a-b5be-ceafca2e105b] Running
	I1027 20:15:02.334072   96119 system_pods.go:61] "kube-scheduler-pause-145997" [f0a7175d-b419-42c1-b485-cb5330d0ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:15:02.334081   96119 system_pods.go:74] duration metric: took 4.353769ms to wait for pod list to return data ...
	I1027 20:15:02.334098   96119 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:15:02.336913   96119 default_sa.go:45] found service account: "default"
	I1027 20:15:02.336934   96119 default_sa.go:55] duration metric: took 2.828181ms for default service account to be created ...
	I1027 20:15:02.336945   96119 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:15:02.340766   96119 system_pods.go:86] 6 kube-system pods found
	I1027 20:15:02.340798   96119 system_pods.go:89] "coredns-66bc5c9577-4qs4m" [92c6d26c-1ff4-4a98-b0f6-963244a8a802] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:15:02.340809   96119 system_pods.go:89] "etcd-pause-145997" [08d8f65d-3056-48ee-9d16-a448d38ba1e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:15:02.340819   96119 system_pods.go:89] "kube-apiserver-pause-145997" [da350639-ea13-402e-8856-3e304c9bc93a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:15:02.340829   96119 system_pods.go:89] "kube-controller-manager-pause-145997" [80f89714-5130-4df9-b9b0-bd9cc6bd5b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:15:02.340836   96119 system_pods.go:89] "kube-proxy-2vzps" [01869f53-a897-4a1a-b5be-ceafca2e105b] Running
	I1027 20:15:02.340845   96119 system_pods.go:89] "kube-scheduler-pause-145997" [f0a7175d-b419-42c1-b485-cb5330d0ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:15:02.340856   96119 system_pods.go:126] duration metric: took 3.903432ms to wait for k8s-apps to be running ...
	I1027 20:15:02.340870   96119 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:15:02.340925   96119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:15:02.364236   96119 system_svc.go:56] duration metric: took 23.350055ms WaitForService to wait for kubelet
	I1027 20:15:02.364284   96119 kubeadm.go:586] duration metric: took 329.831656ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:15:02.364313   96119 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:15:02.369380   96119 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 20:15:02.369405   96119 node_conditions.go:123] node cpu capacity is 2
	I1027 20:15:02.369421   96119 node_conditions.go:105] duration metric: took 5.100585ms to run NodePressure ...
	I1027 20:15:02.369435   96119 start.go:241] waiting for startup goroutines ...
	I1027 20:15:02.369443   96119 start.go:246] waiting for cluster config update ...
	I1027 20:15:02.369452   96119 start.go:255] writing updated cluster config ...
	I1027 20:15:02.369770   96119 ssh_runner.go:195] Run: rm -f paused
	I1027 20:15:02.375485   96119 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:15:02.376524   96119 kapi.go:59] client config for pause-145997: &rest.Config{Host:"https://192.168.72.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/client.key", CAFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]st
ring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 20:15:02.381315   96119 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4qs4m" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:15:04.388537   96119 pod_ready.go:104] pod "coredns-66bc5c9577-4qs4m" is not "Ready", error: <nil>
	I1027 20:15:04.888730   96119 pod_ready.go:94] pod "coredns-66bc5c9577-4qs4m" is "Ready"
	I1027 20:15:04.888759   96119 pod_ready.go:86] duration metric: took 2.507419707s for pod "coredns-66bc5c9577-4qs4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:04.892547   96119 pod_ready.go:83] waiting for pod "etcd-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
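The pod_ready loop above waits for each control-plane pod to either report the Ready condition or disappear. A hedged client-go sketch of that pattern is below; the kubeconfig path, namespace, pod name, and timeout are placeholders for the example, not values taken from the test.

// podready.go: illustrative "Ready or gone" wait using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const ns, name = "kube-system", "etcd-example" // assumed pod
	for i := 0; i < 240; i++ {                     // poll for up to ~4 minutes
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone")
			return
		}
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for pod")
}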
	I1027 20:15:04.586885   96708 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:15:04.586920   96708 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 20:15:04.586928   96708 cache.go:58] Caching tarball of preloaded images
	I1027 20:15:04.587044   96708 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 20:15:04.587059   96708 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:15:04.587138   96708 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/kubernetes-upgrade-176362/config.json ...
	I1027 20:15:04.587355   96708 start.go:360] acquireMachinesLock for kubernetes-upgrade-176362: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 20:15:07.603089   96708 start.go:364] duration metric: took 3.015688216s to acquireMachinesLock for "kubernetes-upgrade-176362"
	I1027 20:15:07.603166   96708 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:15:07.603176   96708 fix.go:54] fixHost starting: 
	I1027 20:15:07.605511   96708 fix.go:112] recreateIfNeeded on kubernetes-upgrade-176362: state=Stopped err=<nil>
	W1027 20:15:07.605551   96708 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 20:15:05.810027   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:05.810688   96282 main.go:141] libmachine: (stopped-upgrade-246578) found domain IP: 192.168.83.222
	I1027 20:15:05.810704   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has current primary IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:05.810709   96282 main.go:141] libmachine: (stopped-upgrade-246578) reserving static IP address...
	I1027 20:15:05.811192   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | unable to find host DHCP lease matching {name: "stopped-upgrade-246578", mac: "52:54:00:c1:7f:c4", ip: "192.168.83.222"} in network mk-stopped-upgrade-246578
	I1027 20:15:06.042920   96282 main.go:141] libmachine: (stopped-upgrade-246578) reserved static IP address 192.168.83.222 for domain stopped-upgrade-246578
	I1027 20:15:06.042937   96282 main.go:141] libmachine: (stopped-upgrade-246578) waiting for SSH...
	I1027 20:15:06.042956   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | Getting to WaitForSSH function...
	I1027 20:15:06.046557   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.047066   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.047091   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.047240   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | Using SSH client type: external
	I1027 20:15:06.047267   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | Using SSH private key: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa (-rw-------)
	I1027 20:15:06.047305   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1027 20:15:06.047320   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | About to run SSH command:
	I1027 20:15:06.047331   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | exit 0
	I1027 20:15:06.145202   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | SSH cmd err, output: <nil>: 
	I1027 20:15:06.145505   96282 main.go:141] libmachine: (stopped-upgrade-246578) domain creation complete
	I1027 20:15:06.145965   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetConfigRaw
	I1027 20:15:06.146542   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:06.146747   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:06.146937   96282 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1027 20:15:06.146950   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetState
	I1027 20:15:06.148533   96282 main.go:141] libmachine: Detecting operating system of created instance...
	I1027 20:15:06.148542   96282 main.go:141] libmachine: Waiting for SSH to be available...
	I1027 20:15:06.148559   96282 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 20:15:06.148565   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.150937   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.151328   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.151354   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.151486   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.151661   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.151813   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.151922   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.152094   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.152464   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.152475   96282 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 20:15:06.273184   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
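WaitForSSH, as logged above, simply keeps running `exit 0` over SSH until the command succeeds. A minimal sketch of such a probe with golang.org/x/crypto/ssh follows; the address, user, and key path are assumptions, and host-key checking is disabled only because the target is a disposable test VM. This is not libmachine's implementation.

// sshprobe.go: probe SSH availability by running "exit 0" on the remote host.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func probe(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, host key not pinned
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // nil error means the command exited with status 0
}

func main() {
	if err := probe("192.168.83.222:22", "docker", "/home/user/.ssh/id_rsa"); err != nil {
		fmt.Println("ssh not ready:", err)
		return
	}
	fmt.Println("ssh is available")
}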
	I1027 20:15:06.273198   96282 main.go:141] libmachine: Detecting the provisioner...
	I1027 20:15:06.273205   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.276237   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.276577   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.276603   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.276775   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.277004   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.277221   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.277365   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.277529   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.277864   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.277874   96282 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1027 20:15:06.401931   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1027 20:15:06.401972   96282 main.go:141] libmachine: found compatible host: buildroot
	I1027 20:15:06.401977   96282 main.go:141] libmachine: Provisioning with buildroot...
	I1027 20:15:06.401985   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetMachineName
	I1027 20:15:06.402229   96282 buildroot.go:166] provisioning hostname "stopped-upgrade-246578"
	I1027 20:15:06.402243   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetMachineName
	I1027 20:15:06.402435   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.404995   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.405330   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.405352   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.405546   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.405724   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.405861   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.406005   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.406159   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.406490   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.406498   96282 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-246578 && echo "stopped-upgrade-246578" | sudo tee /etc/hostname
	I1027 20:15:06.556193   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-246578
	
	I1027 20:15:06.556214   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.559703   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.560192   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.560234   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.560506   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.560682   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.560863   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.561112   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.561310   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.561615   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.561629   96282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-246578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-246578/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-246578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:15:06.692296   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:15:06.692319   96282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 20:15:06.692355   96282 buildroot.go:174] setting up certificates
	I1027 20:15:06.692377   96282 provision.go:83] configureAuth start
	I1027 20:15:06.692387   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetMachineName
	I1027 20:15:06.692671   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:06.696113   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.696507   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.696523   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.696723   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.699449   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.699766   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.699781   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.699925   96282 provision.go:138] copyHostCerts
	I1027 20:15:06.699974   96282 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem, removing ...
	I1027 20:15:06.699991   96282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem
	I1027 20:15:06.700094   96282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 20:15:06.700202   96282 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem, removing ...
	I1027 20:15:06.700207   96282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem
	I1027 20:15:06.700234   96282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 20:15:06.700295   96282 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem, removing ...
	I1027 20:15:06.700297   96282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem
	I1027 20:15:06.700318   96282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 20:15:06.700361   96282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-246578 san=[192.168.83.222 192.168.83.222 localhost 127.0.0.1 minikube stopped-upgrade-246578]
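The provisioning step above generates a server certificate whose SANs cover the VM's IP, localhost, and the machine names, signed by the local minikube CA. The sketch below shows only the SAN-bearing certificate part of that job with Go's crypto/x509; for brevity it is self-signed rather than CA-signed, and every name and address in it is a stand-in.

// servercert.go: simplified, self-signed server certificate with IP and DNS SANs.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"example.server-cert"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject Alternative Names: the addresses clients will connect to.
		IPAddresses: []net.IP{net.ParseIP("192.168.83.222"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "example-host"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}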
	I1027 20:15:06.849826   96282 provision.go:172] copyRemoteCerts
	I1027 20:15:06.849874   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:15:06.849913   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.853169   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.853605   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.853637   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.853896   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.854118   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.854260   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.854405   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:06.944530   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 20:15:06.966875   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:15:06.987586   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 20:15:07.010450   96282 provision.go:86] duration metric: configureAuth took 318.061898ms
	I1027 20:15:07.010468   96282 buildroot.go:189] setting minikube options for container-runtime
	I1027 20:15:07.010668   96282 config.go:182] Loaded profile config "stopped-upgrade-246578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 20:15:07.010776   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.014139   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.014545   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.014565   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.014888   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.015127   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.015285   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.015470   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.015653   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:07.016109   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:07.016125   96282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:15:07.336063   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:15:07.336083   96282 main.go:141] libmachine: Checking connection to Docker...
	I1027 20:15:07.336092   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetURL
	I1027 20:15:07.337504   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | using libvirt version 8000000
	I1027 20:15:07.340241   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.340580   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.340603   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.340760   96282 main.go:141] libmachine: Docker is up and running!
	I1027 20:15:07.340771   96282 main.go:141] libmachine: Reticulating splines...
	I1027 20:15:07.340778   96282 client.go:171] LocalClient.Create took 21.925141515s
	I1027 20:15:07.340797   96282 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-246578" took 21.925209104s
	I1027 20:15:07.340803   96282 start.go:300] post-start starting for "stopped-upgrade-246578" (driver="kvm2")
	I1027 20:15:07.340827   96282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:15:07.340840   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.341078   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:15:07.341094   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.343429   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.343792   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.343816   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.344022   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.344195   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.344380   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.344514   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:07.435501   96282 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:15:07.439398   96282 info.go:137] Remote host: Buildroot 2021.02.12
	I1027 20:15:07.439411   96282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 20:15:07.439466   96282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 20:15:07.439547   96282 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem -> 627052.pem in /etc/ssl/certs
	I1027 20:15:07.439631   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:15:07.447247   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:15:07.468745   96282 start.go:303] post-start completed in 127.930976ms
	I1027 20:15:07.468790   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetConfigRaw
	I1027 20:15:07.469439   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:07.472529   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.472831   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.472855   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.473070   96282 profile.go:148] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/config.json ...
	I1027 20:15:07.473245   96282 start.go:128] duration metric: createHost completed in 22.081515892s
	I1027 20:15:07.473262   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.475765   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.476237   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.476265   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.476513   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.476719   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.476939   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.477118   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.477295   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:07.477746   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:07.477756   96282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 20:15:07.602885   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761596107.570464034
	
	I1027 20:15:07.602898   96282 fix.go:206] guest clock: 1761596107.570464034
	I1027 20:15:07.602903   96282 fix.go:219] Guest: 2025-10-27 20:15:07.570464034 +0000 UTC Remote: 2025-10-27 20:15:07.473250399 +0000 UTC m=+47.984902949 (delta=97.213635ms)
	I1027 20:15:07.602967   96282 fix.go:190] guest clock delta is within tolerance: 97.213635ms
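The fix step above reads `date +%s.%N` from the guest and accepts the machine when the delta against the host clock stays inside a tolerance. A small illustrative parser and comparison follows; the sample timestamp and the 2-second tolerance are assumptions for the example.

// clockdelta.go: parse a guest `date +%s.%N` reading and compare to the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpochNano turns "1761596107.570464034" into a time.Time.
func parseEpochNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		// pad the fractional part to exactly 9 digits (nanoseconds)
		for len(frac) < 9 {
			frac += "0"
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpochNano("1761596107.570464034")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s\n", delta)
	if delta > 2*time.Second {
		fmt.Println("delta exceeds tolerance; the guest clock would need adjusting")
	} else {
		fmt.Println("delta within tolerance")
	}
}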
	I1027 20:15:07.602972   96282 start.go:83] releasing machines lock for "stopped-upgrade-246578", held for 22.211430644s
	I1027 20:15:07.603004   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.603303   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:07.607089   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.607511   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.607538   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.607753   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.608310   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.608500   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.608626   96282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:15:07.608666   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.608734   96282 ssh_runner.go:195] Run: cat /version.json
	I1027 20:15:07.608754   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.612785   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.612808   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.613320   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.613356   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.613380   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.613393   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.613567   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.613587   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.613767   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.613843   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.613974   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.614042   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.614129   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:07.614213   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:07.725402   96282 ssh_runner.go:195] Run: systemctl --version
	I1027 20:15:07.730958   96282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:15:07.887642   96282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:15:07.894972   96282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:15:07.895055   96282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:15:07.910658   96282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 20:15:07.910674   96282 start.go:472] detecting cgroup driver to use...
	I1027 20:15:07.910753   96282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:15:07.923537   96282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:15:07.935968   96282 docker.go:203] disabling cri-docker service (if available) ...
	I1027 20:15:07.936018   96282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:15:07.949195   96282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:15:07.961490   96282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:15:08.066455   96282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:15:08.194428   96282 docker.go:219] disabling docker service ...
	I1027 20:15:08.194501   96282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:15:08.206962   96282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:15:08.218578   96282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:15:08.340254   96282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:15:08.457710   96282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:15:08.470774   96282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:15:08.489461   96282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 20:15:08.489532   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.499829   96282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:15:08.499892   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.509112   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.517943   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
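The three sed invocations above rewrite pause_image, cgroup_manager and conmon_cgroup in the CRI-O drop-in; a quick way to confirm the overrides landed (a sketch run on the guest, not something the test itself does):

    # show the keys the sed edits rewrote; CRI-O only picks them up
    # on the systemctl restart that appears further down in the log
    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf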
	I1027 20:15:08.527355   96282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:15:08.537556   96282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:15:08.548561   96282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 20:15:08.548611   96282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 20:15:08.564195   96282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
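The sysctl probe fails only because br_netfilter is not loaded yet, so the recovery above is load-then-verify plus enabling IPv4 forwarding. A minimal sketch of that check-and-load sequence:

    # if the bridge-netfilter sysctl is missing, load the module, then enable IPv4 forwarding
    if ! sudo sysctl -n net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter
    fi
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'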
	I1027 20:15:08.573419   96282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:15:08.689825   96282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:15:08.874054   96282 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:15:08.874123   96282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:15:08.881103   96282 start.go:540] Will wait 60s for crictl version
	I1027 20:15:08.881168   96282 ssh_runner.go:195] Run: which crictl
	I1027 20:15:08.885642   96282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 20:15:08.941811   96282 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
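The version block above comes from crictl talking to the socket written into /etc/crictl.yaml earlier; the same query can be repeated by hand, for example:

    # ask the runtime for its name and version over the CRI socket configured above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version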
	I1027 20:15:08.941889   96282 ssh_runner.go:195] Run: crio --version
	I1027 20:15:08.992511   96282 ssh_runner.go:195] Run: crio --version
	I1027 20:15:09.056062   96282 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
	I1027 20:15:07.607801   96708 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-176362" ...
	I1027 20:15:07.607850   96708 main.go:141] libmachine: starting domain...
	I1027 20:15:07.607867   96708 main.go:141] libmachine: ensuring networks are active...
	I1027 20:15:07.608762   96708 main.go:141] libmachine: Ensuring network default is active
	I1027 20:15:07.609606   96708 main.go:141] libmachine: Ensuring network mk-kubernetes-upgrade-176362 is active
	I1027 20:15:07.610483   96708 main.go:141] libmachine: getting domain XML...
	I1027 20:15:07.612234   96708 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kubernetes-upgrade-176362</name>
	  <uuid>e148d596-c2b6-4fd1-9e6b-e918d164691e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/kubernetes-upgrade-176362/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/kubernetes-upgrade-176362/kubernetes-upgrade-176362.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:57:ab:c2'/>
	      <source network='mk-kubernetes-upgrade-176362'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e6:58:09'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1027 20:15:09.031292   96708 main.go:141] libmachine: waiting for domain to start...
	I1027 20:15:09.033104   96708 main.go:141] libmachine: domain is now running
	I1027 20:15:09.033130   96708 main.go:141] libmachine: waiting for IP...
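libmachine drives libvirt through its API rather than the CLI, but the dump/start/wait-for-IP sequence above corresponds roughly to the following virsh sketch (the temporary file name is illustrative):

    # dump the existing definition, start the domain, then watch for its DHCP lease / IP
    virsh -c qemu:///system dumpxml kubernetes-upgrade-176362 > /tmp/kubernetes-upgrade-176362.xml
    virsh -c qemu:///system start kubernetes-upgrade-176362
    virsh -c qemu:///system domifaddr kubernetes-upgrade-176362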
	I1027 20:15:09.034267   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has defined MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.034928   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has current primary IP address 192.168.61.42 and MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.034944   96708 main.go:141] libmachine: found domain IP: 192.168.61.42
	I1027 20:15:09.034949   96708 main.go:141] libmachine: reserving static IP address...
	I1027 20:15:09.035359   96708 main.go:141] libmachine: found host DHCP lease matching {name: "kubernetes-upgrade-176362", mac: "52:54:00:57:ab:c2", ip: "192.168.61.42"} in network mk-kubernetes-upgrade-176362: {Iface:virbr3 ExpiryTime:2025-10-27 21:14:32 +0000 UTC Type:0 Mac:52:54:00:57:ab:c2 Iaid: IPaddr:192.168.61.42 Prefix:24 Hostname:kubernetes-upgrade-176362 Clientid:01:52:54:00:57:ab:c2}
	I1027 20:15:09.035395   96708 main.go:141] libmachine: skip adding static IP to network mk-kubernetes-upgrade-176362 - found existing host DHCP lease matching {name: "kubernetes-upgrade-176362", mac: "52:54:00:57:ab:c2", ip: "192.168.61.42"}
	I1027 20:15:09.035407   96708 main.go:141] libmachine: reserved static IP address 192.168.61.42 for domain kubernetes-upgrade-176362
	I1027 20:15:09.035415   96708 main.go:141] libmachine: waiting for SSH...
	I1027 20:15:09.035423   96708 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 20:15:09.037760   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has defined MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.038174   96708 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:ab:c2", ip: ""} in network mk-kubernetes-upgrade-176362: {Iface:virbr3 ExpiryTime:2025-10-27 21:14:32 +0000 UTC Type:0 Mac:52:54:00:57:ab:c2 Iaid: IPaddr:192.168.61.42 Prefix:24 Hostname:kubernetes-upgrade-176362 Clientid:01:52:54:00:57:ab:c2}
	I1027 20:15:09.038221   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has defined IP address 192.168.61.42 and MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.038480   96708 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:09.038721   96708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.42 22 <nil> <nil>}
	I1027 20:15:09.038733   96708 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 20:15:09.057576   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:09.061310   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:09.061703   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:09.061724   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:09.061989   96282 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1027 20:15:09.065891   96282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:15:09.077422   96282 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 20:15:09.077469   96282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:15:09.116175   96282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1027 20:15:09.116231   96282 ssh_runner.go:195] Run: which lz4
	I1027 20:15:09.120394   96282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 20:15:09.124438   96282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 20:15:09.124456   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	W1027 20:15:06.898563   96119 pod_ready.go:104] pod "etcd-pause-145997" is not "Ready", error: <nil>
	I1027 20:15:08.902685   96119 pod_ready.go:94] pod "etcd-pause-145997" is "Ready"
	I1027 20:15:08.902729   96119 pod_ready.go:86] duration metric: took 4.010150553s for pod "etcd-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:08.905784   96119 pod_ready.go:83] waiting for pod "kube-apiserver-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:08.913576   96119 pod_ready.go:94] pod "kube-apiserver-pause-145997" is "Ready"
	I1027 20:15:08.913610   96119 pod_ready.go:86] duration metric: took 7.79699ms for pod "kube-apiserver-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:08.916604   96119 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:15:10.925854   96119 pod_ready.go:104] pod "kube-controller-manager-pause-145997" is not "Ready", error: <nil>
	I1027 20:15:12.097380   96708 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.42:22: connect: no route to host
	I1027 20:15:10.887190   96282 crio.go:444] Took 1.766838 seconds to copy over tarball
	I1027 20:15:10.887295   96282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 20:15:14.028904   96282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.141579507s)
	I1027 20:15:14.028924   96282 crio.go:451] Took 3.141714 seconds to extract the tarball
	I1027 20:15:14.028936   96282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 20:15:14.071776   96282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:15:14.151505   96282 crio.go:496] all images are preloaded for cri-o runtime.
	I1027 20:15:14.151517   96282 cache_images.go:84] Images are preloaded, skipping loading
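After the tarball is unpacked into /var, this second crictl listing is what flips the "not preloaded" decision from 20:15:09. A quick manual spot-check of the same state:

    # confirm the control-plane images from the preload are now visible to CRI-O
    sudo crictl images | grep registry.k8s.io/kube-apiserver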
	I1027 20:15:14.151589   96282 ssh_runner.go:195] Run: crio config
	I1027 20:15:14.215896   96282 cni.go:84] Creating CNI manager for ""
	I1027 20:15:14.215909   96282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:15:14.215927   96282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1027 20:15:14.215958   96282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.222 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-246578 NodeName:stopped-upgrade-246578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:15:14.216133   96282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-246578"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
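The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml further down before kubeadm init runs against it; if needed, it can be sanity-checked without mutating the node via a dry run against the same v1.28.3 binaries (a sketch, not something the test itself runs):

    # parse and validate the rendered kubeadm config without creating anything on the node
    sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run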
	
	I1027 20:15:14.216225   96282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=stopped-upgrade-246578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-246578 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1027 20:15:14.216293   96282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1027 20:15:14.225783   96282 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:15:14.225860   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:15:14.234791   96282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1027 20:15:14.251029   96282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:15:14.268212   96282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1027 20:15:14.284083   96282 ssh_runner.go:195] Run: grep 192.168.83.222	control-plane.minikube.internal$ /etc/hosts
	I1027 20:15:14.287791   96282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:15:14.301196   96282 certs.go:56] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578 for IP: 192.168.83.222
	I1027 20:15:14.301227   96282 certs.go:190] acquiring lock for shared ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.301444   96282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 20:15:14.301492   96282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 20:15:14.301557   96282 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.key
	I1027 20:15:14.301568   96282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.crt with IP's: []
	I1027 20:15:14.489307   96282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.crt ...
	I1027 20:15:14.489323   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.crt: {Name:mk755ae076ac43dc43189a5fb5358bcae2fe7a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.489517   96282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.key ...
	I1027 20:15:14.489528   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.key: {Name:mk1dc8a073e66d33f7a98f520571a41100e4505a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.489638   96282 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9
	I1027 20:15:14.489650   96282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9 with IP's: [192.168.83.222 10.96.0.1 127.0.0.1 10.0.0.1]
	I1027 20:15:14.594480   96282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9 ...
	I1027 20:15:14.600621   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9: {Name:mk19fca7a59711d23b2be8d803a7b7e574e9f9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.600825   96282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9 ...
	I1027 20:15:14.600837   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9: {Name:mkfd081bbc2ed80d30a3842a5527caf2b2c0e583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.600948   96282 certs.go:337] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9 -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt
	I1027 20:15:14.601093   96282 certs.go:341] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9 -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key
	I1027 20:15:14.601190   96282 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key
	I1027 20:15:14.601216   96282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt with IP's: []
	I1027 20:15:14.877587   96282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt ...
	I1027 20:15:14.877607   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt: {Name:mk1d482ce21ea3fcfe3b8b544a03273f71518acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.877815   96282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key ...
	I1027 20:15:14.877830   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key: {Name:mk8c9ddf15b2b0adf0da95b462fdcce69accd502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
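crypto.go issues these client and aggregator certificates programmatically against the shared minikube CA; an openssl sketch of the same idea (file names and subject fields below are illustrative placeholders, not minikube's actual values):

    # hypothetical equivalent: generate a key and CSR, then issue a CA-signed client certificate
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=example-user/O=example-group" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt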
	I1027 20:15:14.878023   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem (1338 bytes)
	W1027 20:15:14.878070   96282 certs.go:433] ignoring /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705_empty.pem, impossibly tiny 0 bytes
	I1027 20:15:14.878078   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 20:15:14.878107   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:15:14.878126   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:15:14.878152   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 20:15:14.878186   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:15:14.878780   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1027 20:15:14.905701   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 20:15:14.929825   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:15:14.954620   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 20:15:14.980957   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:15:15.006702   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:15:15.029601   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:15:15.057212   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 20:15:15.084554   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /usr/share/ca-certificates/627052.pem (1708 bytes)
	I1027 20:15:15.110594   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:15:15.136286   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem --> /usr/share/ca-certificates/62705.pem (1338 bytes)
	I1027 20:15:15.166301   96282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:15:15.183740   96282 ssh_runner.go:195] Run: openssl version
	I1027 20:15:15.189564   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/627052.pem && ln -fs /usr/share/ca-certificates/627052.pem /etc/ssl/certs/627052.pem"
	I1027 20:15:15.199445   96282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/627052.pem
	I1027 20:15:15.203896   96282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:09 /usr/share/ca-certificates/627052.pem
	I1027 20:15:15.203943   96282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/627052.pem
	I1027 20:15:15.209855   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/627052.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:15:15.220182   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:15:15.230744   96282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:15:15.235609   96282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:15:15.235654   96282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:15:15.241084   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:15:15.250910   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/62705.pem && ln -fs /usr/share/ca-certificates/62705.pem /etc/ssl/certs/62705.pem"
	I1027 20:15:15.260760   96282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/62705.pem
	I1027 20:15:15.265971   96282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:09 /usr/share/ca-certificates/62705.pem
	I1027 20:15:15.266030   96282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/62705.pem
	I1027 20:15:15.273540   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/62705.pem /etc/ssl/certs/51391683.0"
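The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are not arbitrary: each is the OpenSSL subject hash of the corresponding certificate, which is how trust-store lookups find it. For example:

    # derive the /etc/ssl/certs/<hash>.0 link name from a certificate's subject hash
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 here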
	I1027 20:15:15.284412   96282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1027 20:15:15.288765   96282 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1027 20:15:15.288829   96282 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-246578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-246578 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1027 20:15:15.288923   96282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:15:15.289000   96282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:15:15.336268   96282 cri.go:89] found id: ""
	I1027 20:15:15.336336   96282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:15:15.346609   96282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:15:15.356305   96282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:15:15.366380   96282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:15:15.366417   96282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 20:15:15.440218   96282 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1027 20:15:15.440277   96282 kubeadm.go:322] [preflight] Running pre-flight checks
	I1027 20:15:15.602046   96282 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:15:15.602186   96282 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:15:15.602334   96282 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1027 20:15:15.849703   96282 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:15:12.425507   96119 pod_ready.go:94] pod "kube-controller-manager-pause-145997" is "Ready"
	I1027 20:15:12.425549   96119 pod_ready.go:86] duration metric: took 3.508916207s for pod "kube-controller-manager-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:12.429697   96119 pod_ready.go:83] waiting for pod "kube-proxy-2vzps" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:12.435281   96119 pod_ready.go:94] pod "kube-proxy-2vzps" is "Ready"
	I1027 20:15:12.435312   96119 pod_ready.go:86] duration metric: took 5.557418ms for pod "kube-proxy-2vzps" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:12.438660   96119 pod_ready.go:83] waiting for pod "kube-scheduler-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:15:14.445813   96119 pod_ready.go:104] pod "kube-scheduler-pause-145997" is not "Ready", error: <nil>
	I1027 20:15:16.102986   96119 pod_ready.go:94] pod "kube-scheduler-pause-145997" is "Ready"
	I1027 20:15:16.103023   96119 pod_ready.go:86] duration metric: took 3.664335129s for pod "kube-scheduler-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:16.103058   96119 pod_ready.go:40] duration metric: took 13.72752845s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:15:16.153280   96119 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 20:15:16.206577   96119 out.go:179] * Done! kubectl is now configured to use "pause-145997" cluster and "default" namespace by default
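The pod_ready loop above polls the API for each control-plane component label in turn; a hand-run equivalent for one of those labels, using the same context name (a sketch):

    # wait for the scheduler pod to report Ready, as the pod_ready helper did above
    kubectl --context pause-145997 -n kube-system wait pod -l component=kube-scheduler \
        --for=condition=Ready --timeout=120s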
	
	
	==> CRI-O <==
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.420387072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0376587f-abb9-4e19-8923-ee19ba443e77 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.422505791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14cc95bf-a427-4e15-ac72-adc3c84f7f4a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.423213377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596117423153801,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14cc95bf-a427-4e15-ac72-adc3c84f7f4a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.423871233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6de23571-ebc5-412d-a039-b4b3d26f2797 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.423970238Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6de23571-ebc5-412d-a039-b4b3d26f2797 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.424350555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6de23571-ebc5-412d-a039-b4b3d26f2797 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.473119417Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e59bba27-6a8e-4776-a968-8b63f4bf155b name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.474360828Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&PodSandboxMetadata{Name:kube-proxy-2vzps,Uid:01869f53-a897-4a1a-b5be-ceafca2e105b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761596100726900594,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T20:15:00.387835201Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-4qs4m,Uid:92c6d26c-1ff4-4a98-b0f6-963244a8a802,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761596100724609462,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-27T20:15:00.387826028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&PodSandboxMetadata{Name:etcd-pause-145997,Uid:8dbd60e6128b2bcb6ef173322a403223,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761596089796637122,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.72.115:2379,kubernetes.io/config.hash: 8dbd60e6128b2bcb6ef173322a403223,kubernetes.io/config.seen: 2025-10-27T20:13:55.577217978Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-145997,Uid:fd75f8f8b3c018aefd65cc6e9837f750,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761596089032981873,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.115:8443,kubernetes.io/config.hash: fd75f8f8b3c018aefd65cc6e9837f750,kubernetes.io/config.seen: 2025-10-27T20:13:55.577220967Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-145997,Uid:6b5723c50c3a0cc61b3bdf541867db4a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761596089027294286,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6b5723c50c3a0cc61b3bdf541867db4a,kubernetes.io/config.seen: 2025-10-27T20:13:55.577221997Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-145997,Uid:b7978fd7c60ce7274985091dc8bc428f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761596089025965422,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b7978fd7c60ce7274985091dc8bc428f,kubernetes.io/config.seen: 2025-10-27T20:13:55.577222801Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e59bba27-6a8e-4776-a968-8b63f4bf155b name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.476439520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=365a8a45-d0e8-42f4-a807-7217be9935ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.476694688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=365a8a45-d0e8-42f4-a807-7217be9935ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.476977502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=365a8a45-d0e8-42f4-a807-7217be9935ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.494612635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=743108cf-566a-4e97-ab34-9765b24385a2 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.494671516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=743108cf-566a-4e97-ab34-9765b24385a2 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.496139051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d487123-c232-4c68-a069-a9fad1ccf14a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.496599437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596117496568660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d487123-c232-4c68-a069-a9fad1ccf14a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.497360922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3b4e72d-0b5a-4d5f-953f-4e4d4451e1e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.497481653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3b4e72d-0b5a-4d5f-953f-4e4d4451e1e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.497916655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3b4e72d-0b5a-4d5f-953f-4e4d4451e1e7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.561758223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37f4b56f-7c29-48e8-925a-273d27bc7948 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.561884270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37f4b56f-7c29-48e8-925a-273d27bc7948 name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.564293949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a07d803-943a-4d13-996f-0d3b762af9c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.565071406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596117565024026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a07d803-943a-4d13-996f-0d3b762af9c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.566017234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=290a55d7-7f5f-42b0-b71e-fee56e6b06c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.566096589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=290a55d7-7f5f-42b0-b71e-fee56e6b06c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:17 pause-145997 crio[2838]: time="2025-10-27 20:15:17.566444617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=290a55d7-7f5f-42b0-b71e-fee56e6b06c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bf534bdbf58dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago       Running             coredns                   1                   70925c8a68ab9       coredns-66bc5c9577-4qs4m
	69d91656fd6e4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   16 seconds ago       Running             kube-proxy                1                   44d0a96940973       kube-proxy-2vzps
	32fdda72d1f3e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago       Running             kube-controller-manager   2                   cbc1df63a2d7f       kube-controller-manager-pause-145997
	17aaf053be8fe       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   21 seconds ago       Running             kube-scheduler            2                   f84f54fa556b6       kube-scheduler-pause-145997
	09c474d221a72       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   21 seconds ago       Running             kube-apiserver            2                   59e2bb88f2cbf       kube-apiserver-pause-145997
	7c1d044ae6575       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   27 seconds ago       Running             etcd                      1                   1d26ac923453b       etcd-pause-145997
	921e5f1ba1c1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   28 seconds ago       Exited              kube-apiserver            1                   59e2bb88f2cbf       kube-apiserver-pause-145997
	a139373de7fd0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   28 seconds ago       Exited              kube-controller-manager   1                   cbc1df63a2d7f       kube-controller-manager-pause-145997
	4fc82ec501049       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   28 seconds ago       Exited              kube-scheduler            1                   f84f54fa556b6       kube-scheduler-pause-145997
	59a7505d23244       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   c34606fc8cf74       coredns-66bc5c9577-4qs4m
	f4fe26d13f640       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   76b8dcf42698b       kube-proxy-2vzps
	8f5d1271f2e4f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   03c81f73b76a7       etcd-pause-145997
	
	
	==> coredns [59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36125 - 44107 "HINFO IN 5402909375033114884.5487682932040252686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018346463s
	
	
	==> describe nodes <==
	Name:               pause-145997
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-145997
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=pause-145997
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_13_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:13:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-145997
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:15:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.115
	  Hostname:    pause-145997
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 259375e4d11443748d324a099e09148b
	  System UUID:                259375e4-d114-4374-8d32-4a099e09148b
	  Boot ID:                    7aac5f79-0ec7-4f01-89c8-40c006ad9883
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4qs4m                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     76s
	  kube-system                 etcd-pause-145997                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         82s
	  kube-system                 kube-apiserver-pause-145997             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-145997    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-2vzps                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-145997             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     89s (x7 over 89s)  kubelet          Node pause-145997 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node pause-145997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node pause-145997 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s                kubelet          Node pause-145997 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet          Node pause-145997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s                kubelet          Node pause-145997 status is now: NodeHasSufficientPID
	  Normal  NodeReady                81s                kubelet          Node pause-145997 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node pause-145997 event: Registered Node pause-145997 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-145997 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-145997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-145997 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-145997 event: Registered Node pause-145997 in Controller
	
	
	==> dmesg <==
	[Oct27 20:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001520] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005029] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.170941] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.136411] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.119830] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.159885] kauditd_printk_skb: 171 callbacks suppressed
	[Oct27 20:14] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.123963] kauditd_printk_skb: 228 callbacks suppressed
	[  +0.114818] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.487528] kauditd_printk_skb: 216 callbacks suppressed
	[Oct27 20:15] kauditd_printk_skb: 85 callbacks suppressed
	
	
	==> etcd [7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39] <==
	{"level":"warn","ts":"2025-10-27T20:14:58.684111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.694163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.704668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.714244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.725245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.732635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.744398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.750965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.769451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.790022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.797124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.805570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.814416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.870982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52712","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T20:15:15.708131Z","caller":"traceutil/trace.go:172","msg":"trace[26727978] linearizableReadLoop","detail":"{readStateIndex:574; appliedIndex:574; }","duration":"108.676348ms","start":"2025-10-27T20:15:15.599433Z","end":"2025-10-27T20:15:15.708110Z","steps":["trace[26727978] 'read index received'  (duration: 108.670403ms)","trace[26727978] 'applied index is now lower than readState.Index'  (duration: 4.974µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:15:15.708303Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.846875ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T20:15:15.708351Z","caller":"traceutil/trace.go:172","msg":"trace[610763209] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:530; }","duration":"108.913721ms","start":"2025-10-27T20:15:15.599427Z","end":"2025-10-27T20:15:15.708341Z","steps":["trace[610763209] 'agreement among raft nodes before linearized reading'  (duration: 108.818491ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:15:15.709324Z","caller":"traceutil/trace.go:172","msg":"trace[609209266] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"216.56273ms","start":"2025-10-27T20:15:15.492747Z","end":"2025-10-27T20:15:15.709310Z","steps":["trace[609209266] 'process raft request'  (duration: 215.687998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:15:16.086870Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.166402ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15752072140304252444 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" mod_revision:531 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T20:15:16.087029Z","caller":"traceutil/trace.go:172","msg":"trace[1784464789] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:575; }","duration":"151.82843ms","start":"2025-10-27T20:15:15.935187Z","end":"2025-10-27T20:15:16.087015Z","steps":["trace[1784464789] 'read index received'  (duration: 92.522µs)","trace[1784464789] 'applied index is now lower than readState.Index'  (duration: 151.734458ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:15:16.087309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.132549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" limit:1 ","response":"range_response_count:1 size:4854"}
	{"level":"info","ts":"2025-10-27T20:15:16.087405Z","caller":"traceutil/trace.go:172","msg":"trace[988378114] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-145997; range_end:; response_count:1; response_revision:532; }","duration":"152.22942ms","start":"2025-10-27T20:15:15.935155Z","end":"2025-10-27T20:15:16.087384Z","steps":["trace[988378114] 'agreement among raft nodes before linearized reading'  (duration: 151.944266ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:15:16.087565Z","caller":"traceutil/trace.go:172","msg":"trace[895888739] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"364.16099ms","start":"2025-10-27T20:15:15.723390Z","end":"2025-10-27T20:15:16.087551Z","steps":["trace[895888739] 'process raft request'  (duration: 123.587981ms)","trace[895888739] 'compare'  (duration: 238.964328ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:15:16.087670Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:15:15.723361Z","time spent":"364.255569ms","remote":"127.0.0.1:51950","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4839,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" mod_revision:531 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" > >"}
	{"level":"warn","ts":"2025-10-27T20:15:16.449489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.803073ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15752072140304252447 > lease_revoke:<id:5a9a9a274f297cf6>","response":"size:28"}
	
	
	==> etcd [8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1] <==
	{"level":"warn","ts":"2025-10-27T20:14:02.836378Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.968443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4413"}
	{"level":"info","ts":"2025-10-27T20:14:02.836390Z","caller":"traceutil/trace.go:172","msg":"trace[1115120290] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:390; }","duration":"283.983789ms","start":"2025-10-27T20:14:02.552403Z","end":"2025-10-27T20:14:02.836387Z","steps":["trace[1115120290] 'agreement among raft nodes before linearized reading'  (duration: 283.920486ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:14:02.836453Z","caller":"traceutil/trace.go:172","msg":"trace[1786370370] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"398.731546ms","start":"2025-10-27T20:14:02.437717Z","end":"2025-10-27T20:14:02.836449Z","steps":["trace[1786370370] 'process raft request'  (duration: 398.255802ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:14:02.836485Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:14:02.437703Z","time spent":"398.761666ms","remote":"127.0.0.1:39392","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":788,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-4qs4m.187272446ada171d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-4qs4m.187272446ada171d\" value_size:700 lease:6528700103433962165 >> failure:<>"}
	{"level":"warn","ts":"2025-10-27T20:14:02.843612Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"402.246283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T20:14:02.843831Z","caller":"traceutil/trace.go:172","msg":"trace[655704997] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:387; }","duration":"402.446907ms","start":"2025-10-27T20:14:02.441326Z","end":"2025-10-27T20:14:02.843773Z","steps":["trace[655704997] 'agreement among raft nodes before linearized reading'  (duration: 117.049555ms)","trace[655704997] 'range keys from in-memory index tree'  (duration: 276.829138ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:14:02.844549Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:14:02.441314Z","time spent":"403.21472ms","remote":"127.0.0.1:39566","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-27T20:14:39.717554Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T20:14:39.717706Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-145997","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.115:2380"],"advertise-client-urls":["https://192.168.72.115:2379"]}
	{"level":"error","ts":"2025-10-27T20:14:39.723203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T20:14:39.802763Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T20:14:39.804290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:14:39.804342Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ef80b93f13cda9a","current-leader-member-id":"ef80b93f13cda9a"}
	{"level":"info","ts":"2025-10-27T20:14:39.804413Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T20:14:39.804428Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804432Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804500Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T20:14:39.804514Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804551Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804558Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.115:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T20:14:39.804563Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.115:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:14:39.807550Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.115:2380"}
	{"level":"error","ts":"2025-10-27T20:14:39.807610Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.115:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:14:39.807728Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.115:2380"}
	{"level":"info","ts":"2025-10-27T20:14:39.807752Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-145997","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.115:2380"],"advertise-client-urls":["https://192.168.72.115:2379"]}
	
	
	==> kernel <==
	 20:15:18 up 1 min,  0 users,  load average: 0.88, 0.41, 0.15
	Linux pause-145997 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660] <==
	I1027 20:14:59.668220       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:14:59.668256       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:14:59.677118       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:14:59.677549       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:14:59.677655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:14:59.703352       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:14:59.704254       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 20:14:59.704861       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:14:59.705093       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 20:14:59.705111       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:14:59.707003       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:14:59.735059       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:14:59.735097       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:14:59.735182       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:14:59.735209       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:15:00.475631       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:15:00.536486       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1027 20:15:01.067270       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.115]
	I1027 20:15:01.073324       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:15:01.098707       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:15:01.856733       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:15:01.933763       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 20:15:01.977034       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:15:01.987321       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:15:02.965949       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca] <==
	I1027 20:14:53.197147       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1027 20:14:53.197164       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1027 20:14:53.197572       1 controller.go:132] Ending legacy_token_tracking_controller
	I1027 20:14:53.197640       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1027 20:14:53.197661       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	E1027 20:14:53.197714       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E1027 20:14:53.197758       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	I1027 20:14:53.197857       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	E1027 20:14:53.197878       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="cluster_authentication_trust_controller"
	E1027 20:14:53.197921       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	F1027 20:14:53.198013       1 hooks.go:204] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I1027 20:14:53.301911       1 local_available_controller.go:164] Shutting down LocalAvailability controller
	I1027 20:14:53.302025       1 cluster_authentication_trust_controller.go:467] Shutting down cluster_authentication_trust_controller controller
	I1027 20:14:53.302042       1 apiservice_controller.go:104] Shutting down APIServiceRegistrationController
	I1027 20:14:53.302122       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 20:14:53.302135       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 20:14:53.302217       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1027 20:14:53.302427       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1027 20:14:53.302498       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 20:14:53.302545       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1027 20:14:53.302585       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1027 20:14:53.302648       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1027 20:14:53.196705       1 establishing_controller.go:92] Shutting down EstablishingController
	I1027 20:14:53.197155       1 autoregister_controller.go:168] Shutting down autoregister controller
	I1027 20:14:53.198049       1 remote_available_controller.go:433] Shutting down RemoteAvailability controller
	
	
	==> kube-controller-manager [32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846] <==
	I1027 20:15:02.967229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:15:02.972589       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:15:02.972635       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:15:02.981134       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:15:02.983553       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 20:15:02.988939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:15:02.990214       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:15:02.990567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:15:02.990595       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:15:02.990601       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:15:02.997989       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:15:02.998135       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:15:03.003773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:15:03.007507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:15:03.007571       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:15:03.007685       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:15:03.007774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:15:03.008842       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 20:15:03.008888       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:15:03.011377       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:15:03.011540       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:15:03.011656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 20:15:03.011673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:15:03.015201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:15:03.017447       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-controller-manager [a139373de7fd05501114a4995b989c4548d3ce9876179050be3b9f77ea24633a] <==
	I1027 20:14:51.324084       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:14:51.533512       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1027 20:14:51.533559       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:14:51.536336       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1027 20:14:51.537669       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 20:14:51.537886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:14:51.537986       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104] <==
	I1027 20:15:01.306197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:15:01.407001       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:15:01.407184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.115"]
	E1027 20:15:01.407261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:15:01.455561       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 20:15:01.455658       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 20:15:01.455688       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:15:01.468980       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:15:01.469357       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:15:01.469647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:15:01.484023       1 config.go:200] "Starting service config controller"
	I1027 20:15:01.484210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:15:01.484244       1 config.go:309] "Starting node config controller"
	I1027 20:15:01.485760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:15:01.485955       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:15:01.484733       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:15:01.486216       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:15:01.484744       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:15:01.486945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:15:01.585906       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:15:01.587325       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:15:01.588609       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c] <==
	I1027 20:14:01.682871       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:14:01.786767       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:14:01.786856       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.115"]
	E1027 20:14:01.786965       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:14:02.011859       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 20:14:02.012943       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 20:14:02.013090       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:14:02.119684       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:14:02.120478       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:14:02.120852       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:14:02.128740       1 config.go:200] "Starting service config controller"
	I1027 20:14:02.128950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:14:02.129071       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:14:02.129094       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:14:02.129297       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:14:02.129303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:14:02.135563       1 config.go:309] "Starting node config controller"
	I1027 20:14:02.135595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:14:02.135602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:14:02.229830       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:14:02.229922       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:14:02.230131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a] <==
	I1027 20:14:57.369121       1 serving.go:386] Generated self-signed cert in-memory
	W1027 20:14:59.495475       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 20:14:59.496209       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 20:14:59.496272       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 20:14:59.496280       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 20:14:59.590446       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:14:59.591878       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:14:59.600597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:59.600722       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:59.601295       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:14:59.601314       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:14:59.705197       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a] <==
	E1027 20:14:53.144951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 20:14:53.145080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:14:53.146821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 20:14:53.147747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 20:14:53.148679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:14:53.150463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:14:53.150574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 20:14:53.150679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:14:53.147684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:14:53.151094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 20:14:53.151201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 20:14:53.151276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 20:14:53.151389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:14:53.151487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:14:53.151551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 20:14:53.151719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 20:14:53.154557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 20:14:53.609515       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1027 20:14:53.609920       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 20:14:53.609964       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 20:14:53.610029       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	E1027 20:14:53.609916       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:53.610167       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:53.610669       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 20:14:53.610750       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 20:14:57 pause-145997 kubelet[3496]: E1027 20:14:57.621105    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:57 pause-145997 kubelet[3496]: E1027 20:14:57.623632    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:58 pause-145997 kubelet[3496]: E1027 20:14:58.623268    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:58 pause-145997 kubelet[3496]: E1027 20:14:58.626580    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.744059    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.748208    3496 kubelet_node_status.go:124] "Node was previously registered" node="pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.748292    3496 kubelet_node_status.go:78] "Successfully registered node" node="pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.748326    3496 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.751140    3496 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.792423    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-145997\" already exists" pod="kube-system/etcd-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.792471    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.808239    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-145997\" already exists" pod="kube-system/kube-apiserver-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.808855    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.822264    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-145997\" already exists" pod="kube-system/kube-controller-manager-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.822298    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.843914    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-145997\" already exists" pod="kube-system/kube-scheduler-pause-145997"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.384398    3496 apiserver.go:52] "Watching apiserver"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.444106    3496 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.531936    3496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01869f53-a897-4a1a-b5be-ceafca2e105b-lib-modules\") pod \"kube-proxy-2vzps\" (UID: \"01869f53-a897-4a1a-b5be-ceafca2e105b\") " pod="kube-system/kube-proxy-2vzps"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.531966    3496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01869f53-a897-4a1a-b5be-ceafca2e105b-xtables-lock\") pod \"kube-proxy-2vzps\" (UID: \"01869f53-a897-4a1a-b5be-ceafca2e105b\") " pod="kube-system/kube-proxy-2vzps"
	Oct 27 20:15:04 pause-145997 kubelet[3496]: I1027 20:15:04.614462    3496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 20:15:05 pause-145997 kubelet[3496]: E1027 20:15:05.583455    3496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761596105582839077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 20:15:05 pause-145997 kubelet[3496]: E1027 20:15:05.583496    3496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761596105582839077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 20:15:15 pause-145997 kubelet[3496]: E1027 20:15:15.585557    3496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761596115585147376  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 20:15:15 pause-145997 kubelet[3496]: E1027 20:15:15.585582    3496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761596115585147376  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-145997 -n pause-145997
helpers_test.go:269: (dbg) Run:  kubectl --context pause-145997 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-145997 -n pause-145997
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-145997 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-145997 logs -n 25: (1.834498966s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-764820 sudo cat /etc/docker/daemon.json                                                                                      │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo docker system info                                                                                               │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl status cri-docker --all --full --no-pager                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl cat cri-docker --no-pager                                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                         │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                   │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cri-dockerd --version                                                                                            │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl status containerd --all --full --no-pager                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl cat containerd --no-pager                                                                              │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /lib/systemd/system/containerd.service                                                                       │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo cat /etc/containerd/config.toml                                                                                  │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo containerd config dump                                                                                           │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl status crio --all --full --no-pager                                                                    │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo systemctl cat crio --no-pager                                                                                    │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                          │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ ssh     │ -p cilium-764820 sudo crio config                                                                                                      │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ delete  │ -p cilium-764820                                                                                                                       │ cilium-764820             │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:13 UTC │
	│ start   │ -p guest-291039 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                │ guest-291039              │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:14 UTC │
	│ ssh     │ -p NoKubernetes-421237 sudo systemctl is-active --quiet service kubelet                                                                │ NoKubernetes-421237       │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │                     │
	│ delete  │ -p NoKubernetes-421237                                                                                                                 │ NoKubernetes-421237       │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:13 UTC │
	│ start   │ -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio │ kubernetes-upgrade-176362 │ jenkins │ v1.37.0 │ 27 Oct 25 20:13 UTC │ 27 Oct 25 20:15 UTC │
	│ start   │ -p pause-145997 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                         │ pause-145997              │ jenkins │ v1.37.0 │ 27 Oct 25 20:14 UTC │ 27 Oct 25 20:15 UTC │
	│ start   │ -p stopped-upgrade-246578 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                     │ stopped-upgrade-246578    │ jenkins │ v1.32.0 │ 27 Oct 25 20:14 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-176362                                                                                                           │ kubernetes-upgrade-176362 │ jenkins │ v1.37.0 │ 27 Oct 25 20:15 UTC │ 27 Oct 25 20:15 UTC │
	│ start   │ -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio │ kubernetes-upgrade-176362 │ jenkins │ v1.37.0 │ 27 Oct 25 20:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:15:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:15:04.526743   96708 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:15:04.527012   96708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:15:04.527022   96708 out.go:374] Setting ErrFile to fd 2...
	I1027 20:15:04.527026   96708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:15:04.527242   96708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:15:04.527698   96708 out.go:368] Setting JSON to false
	I1027 20:15:04.528621   96708 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10655,"bootTime":1761585450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 20:15:04.528742   96708 start.go:141] virtualization: kvm guest
	I1027 20:15:04.531441   96708 out.go:179] * [kubernetes-upgrade-176362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 20:15:04.532935   96708 notify.go:220] Checking for updates...
	I1027 20:15:04.533003   96708 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:15:04.534425   96708 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:15:04.535939   96708 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:15:04.537373   96708 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:15:04.539121   96708 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 20:15:04.540644   96708 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:15:01.342342   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:01.342974   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | no network interface addresses found for domain stopped-upgrade-246578 (source=lease)
	I1027 20:15:01.342998   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | trying to list again with source=arp
	I1027 20:15:01.343439   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | unable to find current IP address of domain stopped-upgrade-246578 in network mk-stopped-upgrade-246578 (interfaces detected: [])
	I1027 20:15:01.343464   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | I1027 20:15:01.343400   96461 retry.go:31] will retry after 4.465038765s: waiting for domain to come up
	I1027 20:15:04.542531   96708 config.go:182] Loaded profile config "kubernetes-upgrade-176362": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 20:15:04.542927   96708 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:15:04.581129   96708 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 20:15:04.582537   96708 start.go:305] selected driver: kvm2
	I1027 20:15:04.582553   96708 start.go:925] validating driver "kvm2" against &{Name:kubernetes-upgrade-176362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-176362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.42 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:15:04.582646   96708 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:15:04.583603   96708 cni.go:84] Creating CNI manager for ""
	I1027 20:15:04.583725   96708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:15:04.583772   96708 start.go:349] cluster config:
	{Name:kubernetes-upgrade-176362 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-176362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:15:04.583860   96708 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:15:04.585435   96708 out.go:179] * Starting "kubernetes-upgrade-176362" primary control-plane node in "kubernetes-upgrade-176362" cluster
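	[editor's note] The lines above load a saved per-profile config ("Loaded profile config ...: Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0") and validate the kvm2 driver against it before restarting the node. A minimal Go sketch of reading such a profile config.json follows; the struct fields are assumptions modeled only on the values the log prints, not minikube's real schema.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// Hypothetical, trimmed-down view of a profile config; field names are guesses
// based on the log output above.
type KubernetesConfig struct {
    KubernetesVersion string
    ClusterName       string
    ContainerRuntime  string
}

type ClusterConfig struct {
    Name             string
    Driver           string
    Memory           int
    CPUs             int
    KubernetesConfig KubernetesConfig
}

func main() {
    // The log reads .minikube/profiles/<name>/config.json; a local copy is assumed here.
    data, err := os.ReadFile("config.json")
    if err != nil {
        panic(err)
    }
    var cc ClusterConfig
    if err := json.Unmarshal(data, &cc); err != nil {
        panic(err)
    }
    fmt.Printf("Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
        cc.Driver, cc.KubernetesConfig.ContainerRuntime, cc.KubernetesConfig.KubernetesVersion)
}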
	I1027 20:15:02.037741   96119 addons.go:514] duration metric: took 3.232021ms for enable addons: enabled=[]
	I1027 20:15:02.037900   96119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:15:02.260341   96119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:15:02.286364   96119 node_ready.go:35] waiting up to 6m0s for node "pause-145997" to be "Ready" ...
	I1027 20:15:02.289911   96119 node_ready.go:49] node "pause-145997" is "Ready"
	I1027 20:15:02.289957   96119 node_ready.go:38] duration metric: took 3.541549ms for node "pause-145997" to be "Ready" ...
	I1027 20:15:02.289977   96119 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:15:02.290053   96119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:15:02.318255   96119 api_server.go:72] duration metric: took 283.786236ms to wait for apiserver process to appear ...
	I1027 20:15:02.318290   96119 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:15:02.318319   96119 api_server.go:253] Checking apiserver healthz at https://192.168.72.115:8443/healthz ...
	I1027 20:15:02.327745   96119 api_server.go:279] https://192.168.72.115:8443/healthz returned 200:
	ok
	I1027 20:15:02.329684   96119 api_server.go:141] control plane version: v1.34.1
	I1027 20:15:02.329708   96119 api_server.go:131] duration metric: took 11.408278ms to wait for apiserver health ...
	I1027 20:15:02.329720   96119 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:15:02.333940   96119 system_pods.go:59] 6 kube-system pods found
	I1027 20:15:02.333990   96119 system_pods.go:61] "coredns-66bc5c9577-4qs4m" [92c6d26c-1ff4-4a98-b0f6-963244a8a802] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:15:02.334001   96119 system_pods.go:61] "etcd-pause-145997" [08d8f65d-3056-48ee-9d16-a448d38ba1e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:15:02.334014   96119 system_pods.go:61] "kube-apiserver-pause-145997" [da350639-ea13-402e-8856-3e304c9bc93a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:15:02.334029   96119 system_pods.go:61] "kube-controller-manager-pause-145997" [80f89714-5130-4df9-b9b0-bd9cc6bd5b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:15:02.334054   96119 system_pods.go:61] "kube-proxy-2vzps" [01869f53-a897-4a1a-b5be-ceafca2e105b] Running
	I1027 20:15:02.334072   96119 system_pods.go:61] "kube-scheduler-pause-145997" [f0a7175d-b419-42c1-b485-cb5330d0ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:15:02.334081   96119 system_pods.go:74] duration metric: took 4.353769ms to wait for pod list to return data ...
	I1027 20:15:02.334098   96119 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:15:02.336913   96119 default_sa.go:45] found service account: "default"
	I1027 20:15:02.336934   96119 default_sa.go:55] duration metric: took 2.828181ms for default service account to be created ...
	I1027 20:15:02.336945   96119 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:15:02.340766   96119 system_pods.go:86] 6 kube-system pods found
	I1027 20:15:02.340798   96119 system_pods.go:89] "coredns-66bc5c9577-4qs4m" [92c6d26c-1ff4-4a98-b0f6-963244a8a802] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:15:02.340809   96119 system_pods.go:89] "etcd-pause-145997" [08d8f65d-3056-48ee-9d16-a448d38ba1e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:15:02.340819   96119 system_pods.go:89] "kube-apiserver-pause-145997" [da350639-ea13-402e-8856-3e304c9bc93a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:15:02.340829   96119 system_pods.go:89] "kube-controller-manager-pause-145997" [80f89714-5130-4df9-b9b0-bd9cc6bd5b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:15:02.340836   96119 system_pods.go:89] "kube-proxy-2vzps" [01869f53-a897-4a1a-b5be-ceafca2e105b] Running
	I1027 20:15:02.340845   96119 system_pods.go:89] "kube-scheduler-pause-145997" [f0a7175d-b419-42c1-b485-cb5330d0ed07] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:15:02.340856   96119 system_pods.go:126] duration metric: took 3.903432ms to wait for k8s-apps to be running ...
	I1027 20:15:02.340870   96119 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:15:02.340925   96119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:15:02.364236   96119 system_svc.go:56] duration metric: took 23.350055ms WaitForService to wait for kubelet
	I1027 20:15:02.364284   96119 kubeadm.go:586] duration metric: took 329.831656ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:15:02.364313   96119 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:15:02.369380   96119 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1027 20:15:02.369405   96119 node_conditions.go:123] node cpu capacity is 2
	I1027 20:15:02.369421   96119 node_conditions.go:105] duration metric: took 5.100585ms to run NodePressure ...
	I1027 20:15:02.369435   96119 start.go:241] waiting for startup goroutines ...
	I1027 20:15:02.369443   96119 start.go:246] waiting for cluster config update ...
	I1027 20:15:02.369452   96119 start.go:255] writing updated cluster config ...
	I1027 20:15:02.369770   96119 ssh_runner.go:195] Run: rm -f paused
	I1027 20:15:02.375485   96119 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:15:02.376524   96119 kapi.go:59] client config for pause-145997: &rest.Config{Host:"https://192.168.72.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/profiles/pause-145997/client.key", CAFile:"/home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 20:15:02.381315   96119 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4qs4m" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:15:04.388537   96119 pod_ready.go:104] pod "coredns-66bc5c9577-4qs4m" is not "Ready", error: <nil>
	I1027 20:15:04.888730   96119 pod_ready.go:94] pod "coredns-66bc5c9577-4qs4m" is "Ready"
	I1027 20:15:04.888759   96119 pod_ready.go:86] duration metric: took 2.507419707s for pod "coredns-66bc5c9577-4qs4m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:04.892547   96119 pod_ready.go:83] waiting for pod "etcd-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
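	[editor's note] The pause-145997 entries above poll https://192.168.72.115:8443/healthz until the apiserver answers 200 "ok" before moving on to kube-system pod checks. A minimal sketch of that polling pattern follows; the timeout, retry interval, and the TLS handling (verification skipped for brevity) are assumptions, not minikube's implementation.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls the given URL until it returns 200 "ok" or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   5 * time.Second,
        // A real client would load the cluster CA instead of skipping verification.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.72.115:8443/healthz", 2*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("apiserver is healthy")
}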
	I1027 20:15:04.586885   96708 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:15:04.586920   96708 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 20:15:04.586928   96708 cache.go:58] Caching tarball of preloaded images
	I1027 20:15:04.587044   96708 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 20:15:04.587059   96708 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:15:04.587138   96708 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/kubernetes-upgrade-176362/config.json ...
	I1027 20:15:04.587355   96708 start.go:360] acquireMachinesLock for kubernetes-upgrade-176362: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 20:15:07.603089   96708 start.go:364] duration metric: took 3.015688216s to acquireMachinesLock for "kubernetes-upgrade-176362"
	I1027 20:15:07.603166   96708 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:15:07.603176   96708 fix.go:54] fixHost starting: 
	I1027 20:15:07.605511   96708 fix.go:112] recreateIfNeeded on kubernetes-upgrade-176362: state=Stopped err=<nil>
	W1027 20:15:07.605551   96708 fix.go:138] unexpected machine state, will restart: <nil>
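	[editor's note] A few entries above, acquireMachinesLock waits on a named machines lock (Delay:500ms, Timeout:13m0s) before the existing kubernetes-upgrade-176362 VM is reused. A rough sketch of that retry-until-timeout pattern with an exclusive lock file follows; the lock path and release semantics are assumptions, not minikube's actual mutex.

package main

import (
    "fmt"
    "os"
    "time"
)

// acquireLock creates the lock file exclusively, retrying every `delay` until `timeout`.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    deadline := time.Now().Add(timeout)
    for {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
        if err == nil {
            f.Close()
            return func() { os.Remove(path) }, nil
        }
        if time.Now().After(deadline) {
            return nil, fmt.Errorf("timed out acquiring %s", path)
        }
        time.Sleep(delay)
    }
}

func main() {
    // Values mirror the Delay/Timeout printed in the log; the path is hypothetical.
    release, err := acquireLock("/tmp/mk-machines.lock", 500*time.Millisecond, 13*time.Minute)
    if err != nil {
        panic(err)
    }
    defer release()
    fmt.Println("lock acquired")
}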
	I1027 20:15:05.810027   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:05.810688   96282 main.go:141] libmachine: (stopped-upgrade-246578) found domain IP: 192.168.83.222
	I1027 20:15:05.810704   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has current primary IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:05.810709   96282 main.go:141] libmachine: (stopped-upgrade-246578) reserving static IP address...
	I1027 20:15:05.811192   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | unable to find host DHCP lease matching {name: "stopped-upgrade-246578", mac: "52:54:00:c1:7f:c4", ip: "192.168.83.222"} in network mk-stopped-upgrade-246578
	I1027 20:15:06.042920   96282 main.go:141] libmachine: (stopped-upgrade-246578) reserved static IP address 192.168.83.222 for domain stopped-upgrade-246578
	I1027 20:15:06.042937   96282 main.go:141] libmachine: (stopped-upgrade-246578) waiting for SSH...
	I1027 20:15:06.042956   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | Getting to WaitForSSH function...
	I1027 20:15:06.046557   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.047066   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.047091   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.047240   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | Using SSH client type: external
	I1027 20:15:06.047267   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | Using SSH private key: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa (-rw-------)
	I1027 20:15:06.047305   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1027 20:15:06.047320   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | About to run SSH command:
	I1027 20:15:06.047331   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | exit 0
	I1027 20:15:06.145202   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | SSH cmd err, output: <nil>: 
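	[editor's note] The stopped-upgrade-246578 lines above wait for SSH by running `exit 0` against the guest, first through an external ssh binary and, below, through a native client. A minimal native-Go sketch of that reachability probe using golang.org/x/crypto/ssh follows; the host, user, and key path are taken from the log, and host-key checking is disabled only to keep the example short.

package main

import (
    "fmt"
    "os"
    "time"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := os.ReadFile("/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa")
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; do not do this in production
        Timeout:         10 * time.Second,
    }
    client, err := ssh.Dial("tcp", "192.168.83.222:22", cfg)
    if err != nil {
        panic(err)
    }
    defer client.Close()
    session, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer session.Close()
    // Same probe as the log: a no-op command proves SSH is up.
    if err := session.Run("exit 0"); err != nil {
        panic(err)
    }
    fmt.Println("SSH is available")
}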
	I1027 20:15:06.145505   96282 main.go:141] libmachine: (stopped-upgrade-246578) domain creation complete
	I1027 20:15:06.145965   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetConfigRaw
	I1027 20:15:06.146542   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:06.146747   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:06.146937   96282 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1027 20:15:06.146950   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetState
	I1027 20:15:06.148533   96282 main.go:141] libmachine: Detecting operating system of created instance...
	I1027 20:15:06.148542   96282 main.go:141] libmachine: Waiting for SSH to be available...
	I1027 20:15:06.148559   96282 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 20:15:06.148565   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.150937   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.151328   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.151354   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.151486   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.151661   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.151813   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.151922   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.152094   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.152464   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.152475   96282 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 20:15:06.273184   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:15:06.273198   96282 main.go:141] libmachine: Detecting the provisioner...
	I1027 20:15:06.273205   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.276237   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.276577   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.276603   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.276775   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.277004   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.277221   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.277365   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.277529   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.277864   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.277874   96282 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1027 20:15:06.401931   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb75713b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1027 20:15:06.401972   96282 main.go:141] libmachine: found compatible host: buildroot
	I1027 20:15:06.401977   96282 main.go:141] libmachine: Provisioning with buildroot...
	I1027 20:15:06.401985   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetMachineName
	I1027 20:15:06.402229   96282 buildroot.go:166] provisioning hostname "stopped-upgrade-246578"
	I1027 20:15:06.402243   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetMachineName
	I1027 20:15:06.402435   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.404995   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.405330   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.405352   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.405546   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.405724   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.405861   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.406005   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.406159   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.406490   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.406498   96282 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-246578 && echo "stopped-upgrade-246578" | sudo tee /etc/hostname
	I1027 20:15:06.556193   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-246578
	
	I1027 20:15:06.556214   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.559703   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.560192   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.560234   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.560506   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.560682   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.560863   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.561112   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.561310   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:06.561615   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:06.561629   96282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-246578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-246578/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-246578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:15:06.692296   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:15:06.692319   96282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 20:15:06.692355   96282 buildroot.go:174] setting up certificates
	I1027 20:15:06.692377   96282 provision.go:83] configureAuth start
	I1027 20:15:06.692387   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetMachineName
	I1027 20:15:06.692671   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:06.696113   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.696507   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.696523   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.696723   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.699449   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.699766   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.699781   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.699925   96282 provision.go:138] copyHostCerts
	I1027 20:15:06.699974   96282 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem, removing ...
	I1027 20:15:06.699991   96282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem
	I1027 20:15:06.700094   96282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 20:15:06.700202   96282 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem, removing ...
	I1027 20:15:06.700207   96282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem
	I1027 20:15:06.700234   96282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 20:15:06.700295   96282 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem, removing ...
	I1027 20:15:06.700297   96282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem
	I1027 20:15:06.700318   96282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 20:15:06.700361   96282 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-246578 san=[192.168.83.222 192.168.83.222 localhost 127.0.0.1 minikube stopped-upgrade-246578]
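	[editor's note] The line above generates a server certificate signed by the host CA with the SAN list shown (192.168.83.222, 127.0.0.1, localhost, minikube, stopped-upgrade-246578). A sketch of that kind of CA-signed server cert follows; it assumes RSA/PKCS#1 key material and local ca.pem/ca-key.pem copies, and is not minikube's provisioner code.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    caPEM, err := os.ReadFile("ca.pem")
    if err != nil {
        panic(err)
    }
    caKeyPEM, err := os.ReadFile("ca-key.pem")
    if err != nil {
        panic(err)
    }
    caBlock, _ := pem.Decode(caPEM)
    caCert, err := x509.ParseCertificate(caBlock.Bytes)
    if err != nil {
        panic(err)
    }
    keyBlock, _ := pem.Decode(caKeyPEM)
    caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
    if err != nil {
        panic(err)
    }
    serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-246578"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // SANs taken from the log line above.
        IPAddresses: []net.IP{net.ParseIP("192.168.83.222"), net.ParseIP("127.0.0.1")},
        DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-246578"},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}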
	I1027 20:15:06.849826   96282 provision.go:172] copyRemoteCerts
	I1027 20:15:06.849874   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:15:06.849913   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:06.853169   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.853605   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:06.853637   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:06.853896   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:06.854118   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:06.854260   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:06.854405   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:06.944530   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 20:15:06.966875   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:15:06.987586   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 20:15:07.010450   96282 provision.go:86] duration metric: configureAuth took 318.061898ms
	I1027 20:15:07.010468   96282 buildroot.go:189] setting minikube options for container-runtime
	I1027 20:15:07.010668   96282 config.go:182] Loaded profile config "stopped-upgrade-246578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1027 20:15:07.010776   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.014139   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.014545   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.014565   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.014888   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.015127   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.015285   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.015470   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.015653   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:07.016109   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:07.016125   96282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:15:07.336063   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:15:07.336083   96282 main.go:141] libmachine: Checking connection to Docker...
	I1027 20:15:07.336092   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetURL
	I1027 20:15:07.337504   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | using libvirt version 8000000
	I1027 20:15:07.340241   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.340580   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.340603   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.340760   96282 main.go:141] libmachine: Docker is up and running!
	I1027 20:15:07.340771   96282 main.go:141] libmachine: Reticulating splines...
	I1027 20:15:07.340778   96282 client.go:171] LocalClient.Create took 21.925141515s
	I1027 20:15:07.340797   96282 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-246578" took 21.925209104s
	I1027 20:15:07.340803   96282 start.go:300] post-start starting for "stopped-upgrade-246578" (driver="kvm2")
	I1027 20:15:07.340827   96282 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:15:07.340840   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.341078   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:15:07.341094   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.343429   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.343792   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.343816   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.344022   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.344195   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.344380   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.344514   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:07.435501   96282 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:15:07.439398   96282 info.go:137] Remote host: Buildroot 2021.02.12
	I1027 20:15:07.439411   96282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 20:15:07.439466   96282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 20:15:07.439547   96282 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem -> 627052.pem in /etc/ssl/certs
	I1027 20:15:07.439631   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:15:07.447247   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:15:07.468745   96282 start.go:303] post-start completed in 127.930976ms
	I1027 20:15:07.468790   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetConfigRaw
	I1027 20:15:07.469439   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:07.472529   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.472831   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.472855   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.473070   96282 profile.go:148] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/config.json ...
	I1027 20:15:07.473245   96282 start.go:128] duration metric: createHost completed in 22.081515892s
	I1027 20:15:07.473262   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.475765   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.476237   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.476265   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.476513   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.476719   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.476939   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.477118   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.477295   96282 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:07.477746   96282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.83.222 22 <nil> <nil>}
	I1027 20:15:07.477756   96282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 20:15:07.602885   96282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761596107.570464034
	
	I1027 20:15:07.602898   96282 fix.go:206] guest clock: 1761596107.570464034
	I1027 20:15:07.602903   96282 fix.go:219] Guest: 2025-10-27 20:15:07.570464034 +0000 UTC Remote: 2025-10-27 20:15:07.473250399 +0000 UTC m=+47.984902949 (delta=97.213635ms)
	I1027 20:15:07.602967   96282 fix.go:190] guest clock delta is within tolerance: 97.213635ms
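	[editor's note] The fix.go lines above read the guest clock with `date +%s.%N` over SSH and compare it with the host clock, accepting the machine when the delta (97ms here) is within tolerance. A tiny sketch of that comparison follows; the tolerance constant is an assumption.

package main

import (
    "fmt"
    "math"
    "strconv"
    "time"
)

func main() {
    guestOut := "1761596107.570464034" // the value echoed back in the log
    secs, err := strconv.ParseFloat(guestOut, 64)
    if err != nil {
        panic(err)
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    delta := time.Since(guest)
    tolerance := 2 * time.Second // assumed tolerance, not minikube's real constant
    if math.Abs(float64(delta)) <= float64(tolerance) {
        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    }
}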
	I1027 20:15:07.602972   96282 start.go:83] releasing machines lock for "stopped-upgrade-246578", held for 22.211430644s
	I1027 20:15:07.603004   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.603303   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:07.607089   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.607511   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.607538   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.607753   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.608310   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.608500   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .DriverName
	I1027 20:15:07.608626   96282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:15:07.608666   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.608734   96282 ssh_runner.go:195] Run: cat /version.json
	I1027 20:15:07.608754   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHHostname
	I1027 20:15:07.612785   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.612808   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.613320   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.613356   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:07.613380   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.613393   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:07.613567   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.613587   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHPort
	I1027 20:15:07.613767   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.613843   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHKeyPath
	I1027 20:15:07.613974   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.614042   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetSSHUsername
	I1027 20:15:07.614129   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:07.614213   96282 sshutil.go:53] new ssh client: &{IP:192.168.83.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/stopped-upgrade-246578/id_rsa Username:docker}
	I1027 20:15:07.725402   96282 ssh_runner.go:195] Run: systemctl --version
	I1027 20:15:07.730958   96282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:15:07.887642   96282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:15:07.894972   96282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:15:07.895055   96282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:15:07.910658   96282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 20:15:07.910674   96282 start.go:472] detecting cgroup driver to use...
	I1027 20:15:07.910753   96282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:15:07.923537   96282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:15:07.935968   96282 docker.go:203] disabling cri-docker service (if available) ...
	I1027 20:15:07.936018   96282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:15:07.949195   96282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:15:07.961490   96282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:15:08.066455   96282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:15:08.194428   96282 docker.go:219] disabling docker service ...
	I1027 20:15:08.194501   96282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:15:08.206962   96282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:15:08.218578   96282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:15:08.340254   96282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:15:08.457710   96282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:15:08.470774   96282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:15:08.489461   96282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 20:15:08.489532   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.499829   96282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:15:08.499892   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.509112   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.517943   96282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:15:08.527355   96282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:15:08.537556   96282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:15:08.548561   96282 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 20:15:08.548611   96282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 20:15:08.564195   96282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:15:08.573419   96282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:15:08.689825   96282 ssh_runner.go:195] Run: sudo systemctl restart crio
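	[editor's note] The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup), write /etc/crictl.yaml, enable IP forwarding, and restart crio. The following sketch performs the same kind of in-place config rewrite locally, using Go's regexp in place of sed; paths and values come from the log, everything else is an assumption and the program would need to run as root.

package main

import (
    "os"
    "regexp"
)

// rewrite replaces every line matching pattern with replacement in the given file.
func rewrite(path, pattern, replacement string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    out := regexp.MustCompile(pattern).ReplaceAll(data, []byte(replacement))
    return os.WriteFile(path, out, 0644)
}

func main() {
    conf := "/etc/crio/crio.conf.d/02-crio.conf"
    if err := rewrite(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`); err != nil {
        panic(err)
    }
    if err := rewrite(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
        panic(err)
    }
    // Point crictl at the crio socket, as the tee command in the log does.
    crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
    if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0644); err != nil {
        panic(err)
    }
    // A real provisioner would now run `systemctl daemon-reload` and
    // `systemctl restart crio`, as the log shows.
}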
	I1027 20:15:08.874054   96282 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:15:08.874123   96282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:15:08.881103   96282 start.go:540] Will wait 60s for crictl version
	I1027 20:15:08.881168   96282 ssh_runner.go:195] Run: which crictl
	I1027 20:15:08.885642   96282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 20:15:08.941811   96282 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1027 20:15:08.941889   96282 ssh_runner.go:195] Run: crio --version
	I1027 20:15:08.992511   96282 ssh_runner.go:195] Run: crio --version
	I1027 20:15:09.056062   96282 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.1 ...
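	[editor's note] After the crio restart, the log waits up to 60s for /var/run/crio/crio.sock and then for `crictl version` to succeed. A small sketch of polling a unix socket with a deadline follows; the one-second poll interval is an assumption.

package main

import (
    "fmt"
    "net"
    "time"
)

// waitForSocket dials the unix socket until it accepts a connection or the timeout passes.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err == nil {
            conn.Close()
            return nil
        }
        time.Sleep(time.Second)
    }
    return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
    if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        panic(err)
    }
    fmt.Println("crio socket is ready; `crictl version` should now succeed")
}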
	I1027 20:15:07.607801   96708 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-176362" ...
	I1027 20:15:07.607850   96708 main.go:141] libmachine: starting domain...
	I1027 20:15:07.607867   96708 main.go:141] libmachine: ensuring networks are active...
	I1027 20:15:07.608762   96708 main.go:141] libmachine: Ensuring network default is active
	I1027 20:15:07.609606   96708 main.go:141] libmachine: Ensuring network mk-kubernetes-upgrade-176362 is active
	I1027 20:15:07.610483   96708 main.go:141] libmachine: getting domain XML...
	I1027 20:15:07.612234   96708 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kubernetes-upgrade-176362</name>
	  <uuid>e148d596-c2b6-4fd1-9e6b-e918d164691e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/kubernetes-upgrade-176362/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/kubernetes-upgrade-176362/kubernetes-upgrade-176362.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:57:ab:c2'/>
	      <source network='mk-kubernetes-upgrade-176362'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e6:58:09'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1027 20:15:09.031292   96708 main.go:141] libmachine: waiting for domain to start...
	I1027 20:15:09.033104   96708 main.go:141] libmachine: domain is now running
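	[editor's note] The XML above is the definition the existing kubernetes-upgrade-176362 VM is restarted from. The log drives this through libmachine's libvirt bindings; the sketch below shows the equivalent operation expressed with the virsh CLI instead, which is an assumption about tooling, not what the test actually ran.

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    name := "kubernetes-upgrade-176362"
    // Dump the current definition, roughly what "getting domain XML..." refers to.
    xml, err := exec.Command("virsh", "--connect", "qemu:///system", "dumpxml", name).Output()
    if err != nil {
        panic(err)
    }
    fmt.Printf("domain XML is %d bytes\n", len(xml))
    // Start the stopped domain, the equivalent of "starting domain..." / "domain is now running".
    if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
        panic(fmt.Errorf("%v: %s", err, out))
    }
    fmt.Println("domain started")
}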
	I1027 20:15:09.033130   96708 main.go:141] libmachine: waiting for IP...
	I1027 20:15:09.034267   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has defined MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.034928   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has current primary IP address 192.168.61.42 and MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.034944   96708 main.go:141] libmachine: found domain IP: 192.168.61.42
	I1027 20:15:09.034949   96708 main.go:141] libmachine: reserving static IP address...
	I1027 20:15:09.035359   96708 main.go:141] libmachine: found host DHCP lease matching {name: "kubernetes-upgrade-176362", mac: "52:54:00:57:ab:c2", ip: "192.168.61.42"} in network mk-kubernetes-upgrade-176362: {Iface:virbr3 ExpiryTime:2025-10-27 21:14:32 +0000 UTC Type:0 Mac:52:54:00:57:ab:c2 Iaid: IPaddr:192.168.61.42 Prefix:24 Hostname:kubernetes-upgrade-176362 Clientid:01:52:54:00:57:ab:c2}
	I1027 20:15:09.035395   96708 main.go:141] libmachine: skip adding static IP to network mk-kubernetes-upgrade-176362 - found existing host DHCP lease matching {name: "kubernetes-upgrade-176362", mac: "52:54:00:57:ab:c2", ip: "192.168.61.42"}
	I1027 20:15:09.035407   96708 main.go:141] libmachine: reserved static IP address 192.168.61.42 for domain kubernetes-upgrade-176362
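
	The libmachine lines above resolve the new domain's IP by scanning the libvirt network's DHCP leases for the VM's MAC address. A minimal sketch of the same lookup, shelling out to virsh rather than using the libvirt bindings minikube actually relies on (the network name and MAC are the ones reported in this log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// leaseIPForMAC lists the network's DHCP leases via virsh and returns the
	// IP associated with the given MAC, roughly what the "found host DHCP
	// lease matching ..." lines above do through the libvirt API.
	func leaseIPForMAC(network, mac string) (string, error) {
		out, err := exec.Command("virsh", "-c", "qemu:///system", "net-dhcp-leases", network).Output()
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(out), "\n") {
			if !strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
				continue
			}
			for _, field := range strings.Fields(line) {
				if strings.Contains(field, "/") { // lease column looks like 192.168.61.42/24
					return strings.SplitN(field, "/", 2)[0], nil
				}
			}
		}
		return "", fmt.Errorf("no lease for %s in %s", mac, network)
	}

	func main() {
		ip, err := leaseIPForMAC("mk-kubernetes-upgrade-176362", "52:54:00:57:ab:c2")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("domain IP:", ip)
	}
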
	I1027 20:15:09.035415   96708 main.go:141] libmachine: waiting for SSH...
	I1027 20:15:09.035423   96708 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 20:15:09.037760   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has defined MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.038174   96708 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:ab:c2", ip: ""} in network mk-kubernetes-upgrade-176362: {Iface:virbr3 ExpiryTime:2025-10-27 21:14:32 +0000 UTC Type:0 Mac:52:54:00:57:ab:c2 Iaid: IPaddr:192.168.61.42 Prefix:24 Hostname:kubernetes-upgrade-176362 Clientid:01:52:54:00:57:ab:c2}
	I1027 20:15:09.038221   96708 main.go:141] libmachine: domain kubernetes-upgrade-176362 has defined IP address 192.168.61.42 and MAC address 52:54:00:57:ab:c2 in network mk-kubernetes-upgrade-176362
	I1027 20:15:09.038480   96708 main.go:141] libmachine: Using SSH client type: native
	I1027 20:15:09.038721   96708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.42 22 <nil> <nil>}
	I1027 20:15:09.038733   96708 main.go:141] libmachine: About to run SSH command:
	exit 0
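
	Once the lease is known, libmachine blocks on SSH reachability; in this run the dial later fails with "no route to host" (see the Error dialing TCP line below). A rough stand-alone sketch of such a wait loop, polling TCP port 22 with the standard library only (the address and timeouts here are placeholders, not minikube's real settings):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls the guest's SSH port until it accepts a TCP connection
	// or the deadline passes. A timeout corresponds to the condition behind
	// the "Error dialing TCP ... no route to host" line in the log.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for SSH on %s", addr)
	}

	func main() {
		// 192.168.61.42 is the lease the log reports for kubernetes-upgrade-176362.
		if err := waitForSSH("192.168.61.42:22", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
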
	I1027 20:15:09.057576   96282 main.go:141] libmachine: (stopped-upgrade-246578) Calling .GetIP
	I1027 20:15:09.061310   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:09.061703   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:7f:c4", ip: ""} in network mk-stopped-upgrade-246578: {Iface:virbr5 ExpiryTime:2025-10-27 21:15:02 +0000 UTC Type:0 Mac:52:54:00:c1:7f:c4 Iaid: IPaddr:192.168.83.222 Prefix:24 Hostname:stopped-upgrade-246578 Clientid:01:52:54:00:c1:7f:c4}
	I1027 20:15:09.061724   96282 main.go:141] libmachine: (stopped-upgrade-246578) DBG | domain stopped-upgrade-246578 has defined IP address 192.168.83.222 and MAC address 52:54:00:c1:7f:c4 in network mk-stopped-upgrade-246578
	I1027 20:15:09.061989   96282 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1027 20:15:09.065891   96282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:15:09.077422   96282 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 20:15:09.077469   96282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:15:09.116175   96282 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I1027 20:15:09.116231   96282 ssh_runner.go:195] Run: which lz4
	I1027 20:15:09.120394   96282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 20:15:09.124438   96282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 20:15:09.124456   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	W1027 20:15:06.898563   96119 pod_ready.go:104] pod "etcd-pause-145997" is not "Ready", error: <nil>
	I1027 20:15:08.902685   96119 pod_ready.go:94] pod "etcd-pause-145997" is "Ready"
	I1027 20:15:08.902729   96119 pod_ready.go:86] duration metric: took 4.010150553s for pod "etcd-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:08.905784   96119 pod_ready.go:83] waiting for pod "kube-apiserver-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:08.913576   96119 pod_ready.go:94] pod "kube-apiserver-pause-145997" is "Ready"
	I1027 20:15:08.913610   96119 pod_ready.go:86] duration metric: took 7.79699ms for pod "kube-apiserver-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:08.916604   96119 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:15:10.925854   96119 pod_ready.go:104] pod "kube-controller-manager-pause-145997" is not "Ready", error: <nil>
	I1027 20:15:12.097380   96708 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.42:22: connect: no route to host
	I1027 20:15:10.887190   96282 crio.go:444] Took 1.766838 seconds to copy over tarball
	I1027 20:15:10.887295   96282 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 20:15:14.028904   96282 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.141579507s)
	I1027 20:15:14.028924   96282 crio.go:451] Took 3.141714 seconds to extract the tarball
	I1027 20:15:14.028936   96282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 20:15:14.071776   96282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:15:14.151505   96282 crio.go:496] all images are preloaded for cri-o runtime.
	I1027 20:15:14.151517   96282 cache_images.go:84] Images are preloaded, skipping loading
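
	The stopped-upgrade run above takes the preload path: crictl reports the expected v1.28.3 images missing, the lz4 tarball is copied to /preloaded.tar.lz4, extracted under /var, and a second "crictl images" pass confirms the runtime is populated. A simplified local sketch of that extract-and-verify step, assuming sudo, lz4 and crictl are on PATH (this is not the ssh_runner code itself):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball under /var and
	// then asks CRI-O which images it now has, mirroring the
	// "tar -I lz4 -C /var -xf" and "crictl images --output json" steps above.
	func extractPreload(tarball string) error {
		if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return fmt.Errorf("crictl images: %v", err)
		}
		fmt.Printf("crictl returned %d bytes of image metadata\n", len(out))
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}
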
	I1027 20:15:14.151589   96282 ssh_runner.go:195] Run: crio config
	I1027 20:15:14.215896   96282 cni.go:84] Creating CNI manager for ""
	I1027 20:15:14.215909   96282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 20:15:14.215927   96282 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1027 20:15:14.215958   96282 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.222 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-246578 NodeName:stopped-upgrade-246578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:15:14.216133   96282 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-246578"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 20:15:14.216225   96282 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=stopped-upgrade-246578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-246578 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1027 20:15:14.216293   96282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1027 20:15:14.225783   96282 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:15:14.225860   96282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:15:14.234791   96282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1027 20:15:14.251029   96282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:15:14.268212   96282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I1027 20:15:14.284083   96282 ssh_runner.go:195] Run: grep 192.168.83.222	control-plane.minikube.internal$ /etc/hosts
	I1027 20:15:14.287791   96282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
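
	Both hosts-file updates above (host.minikube.internal and control-plane.minikube.internal) use the same idempotent pattern: drop any stale line for the hostname, then append a fresh "IP<TAB>hostname" entry. A small in-memory Go equivalent of that rewrite, for illustration only; minikube performs it with the bash one-liner shown in the log:

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry mimics the grep/rewrite one-liner: remove any existing
	// line that maps the hostname, then append "ip<TAB>hostname". Writing the
	// result back (sudo cp in the log) is left out of this sketch.
	func ensureHostsEntry(hosts, ip, hostname string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+hostname) {
				continue // stale entry, drop it
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+hostname)
		return strings.Join(kept, "\n")
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.83.5\tcontrol-plane.minikube.internal"
		fmt.Println(ensureHostsEntry(hosts, "192.168.83.222", "control-plane.minikube.internal"))
	}
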
	I1027 20:15:14.301196   96282 certs.go:56] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578 for IP: 192.168.83.222
	I1027 20:15:14.301227   96282 certs.go:190] acquiring lock for shared ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.301444   96282 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 20:15:14.301492   96282 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 20:15:14.301557   96282 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.key
	I1027 20:15:14.301568   96282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.crt with IP's: []
	I1027 20:15:14.489307   96282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.crt ...
	I1027 20:15:14.489323   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.crt: {Name:mk755ae076ac43dc43189a5fb5358bcae2fe7a04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.489517   96282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.key ...
	I1027 20:15:14.489528   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/client.key: {Name:mk1dc8a073e66d33f7a98f520571a41100e4505a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.489638   96282 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9
	I1027 20:15:14.489650   96282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9 with IP's: [192.168.83.222 10.96.0.1 127.0.0.1 10.0.0.1]
	I1027 20:15:14.594480   96282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9 ...
	I1027 20:15:14.600621   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9: {Name:mk19fca7a59711d23b2be8d803a7b7e574e9f9d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.600825   96282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9 ...
	I1027 20:15:14.600837   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9: {Name:mkfd081bbc2ed80d30a3842a5527caf2b2c0e583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.600948   96282 certs.go:337] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt.8a8d8ec9 -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt
	I1027 20:15:14.601093   96282 certs.go:341] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key.8a8d8ec9 -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key
	I1027 20:15:14.601190   96282 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key
	I1027 20:15:14.601216   96282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt with IP's: []
	I1027 20:15:14.877587   96282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt ...
	I1027 20:15:14.877607   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt: {Name:mk1d482ce21ea3fcfe3b8b544a03273f71518acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:15:14.877815   96282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key ...
	I1027 20:15:14.877830   96282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key: {Name:mk8c9ddf15b2b0adf0da95b462fdcce69accd502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
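
	Each certificate and key write above goes through a WriteFile lock so concurrent minikube processes cannot interleave partial writes. The sketch below illustrates the related idea of an atomic write (temp file plus rename); it is not minikube's lock.go, just a minimal stand-in for safe key/cert persistence:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// writeFileAtomic writes data to a temporary file in the same directory and
	// renames it into place, so readers never observe a half-written key or cert.
	func writeFileAtomic(path string, data []byte, perm os.FileMode) error {
		tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
		if err != nil {
			return err
		}
		defer os.Remove(tmp.Name()) // cleanup on early return; harmless after a successful rename
		if _, err := tmp.Write(data); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Close(); err != nil {
			return err
		}
		if err := os.Chmod(tmp.Name(), perm); err != nil {
			return err
		}
		return os.Rename(tmp.Name(), path)
	}

	func main() {
		if err := writeFileAtomic("/tmp/example.key", []byte("dummy"), 0o600); err != nil {
			fmt.Println(err)
		}
	}
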
	I1027 20:15:14.878023   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem (1338 bytes)
	W1027 20:15:14.878070   96282 certs.go:433] ignoring /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705_empty.pem, impossibly tiny 0 bytes
	I1027 20:15:14.878078   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 20:15:14.878107   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:15:14.878126   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:15:14.878152   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 20:15:14.878186   96282 certs.go:437] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:15:14.878780   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1027 20:15:14.905701   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 20:15:14.929825   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:15:14.954620   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/stopped-upgrade-246578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 20:15:14.980957   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:15:15.006702   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:15:15.029601   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:15:15.057212   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 20:15:15.084554   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /usr/share/ca-certificates/627052.pem (1708 bytes)
	I1027 20:15:15.110594   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:15:15.136286   96282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem --> /usr/share/ca-certificates/62705.pem (1338 bytes)
	I1027 20:15:15.166301   96282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:15:15.183740   96282 ssh_runner.go:195] Run: openssl version
	I1027 20:15:15.189564   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/627052.pem && ln -fs /usr/share/ca-certificates/627052.pem /etc/ssl/certs/627052.pem"
	I1027 20:15:15.199445   96282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/627052.pem
	I1027 20:15:15.203896   96282 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:09 /usr/share/ca-certificates/627052.pem
	I1027 20:15:15.203943   96282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/627052.pem
	I1027 20:15:15.209855   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/627052.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:15:15.220182   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:15:15.230744   96282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:15:15.235609   96282 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:15:15.235654   96282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:15:15.241084   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:15:15.250910   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/62705.pem && ln -fs /usr/share/ca-certificates/62705.pem /etc/ssl/certs/62705.pem"
	I1027 20:15:15.260760   96282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/62705.pem
	I1027 20:15:15.265971   96282 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:09 /usr/share/ca-certificates/62705.pem
	I1027 20:15:15.266030   96282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/62705.pem
	I1027 20:15:15.273540   96282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/62705.pem /etc/ssl/certs/51391683.0"
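
	The openssl/ln sequence above installs each CA under /etc/ssl/certs as a <subject-hash>.0 symlink, where the hash comes from "openssl x509 -hash -noout". A small helper that computes the link name the same way (paths are taken from the log; the helper itself is only illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// certHashLink computes the OpenSSL subject hash for a CA certificate and
	// returns the /etc/ssl/certs/<hash>.0 symlink name that the log's
	// "ln -fs" commands create so TLS verification picks the cert up.
	func certHashLink(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", fmt.Errorf("openssl hash %s: %v", certPath, err)
		}
		return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		link, err := certHashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		// The log shows this resolving to /etc/ssl/certs/b5213941.0 for minikubeCA.pem.
		fmt.Println("symlink target:", link)
	}
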
	I1027 20:15:15.284412   96282 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1027 20:15:15.288765   96282 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1027 20:15:15.288829   96282 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-246578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-246578 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1027 20:15:15.288923   96282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:15:15.289000   96282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:15:15.336268   96282 cri.go:89] found id: ""
	I1027 20:15:15.336336   96282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:15:15.346609   96282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:15:15.356305   96282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:15:15.366380   96282 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:15:15.366417   96282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 20:15:15.440218   96282 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1027 20:15:15.440277   96282 kubeadm.go:322] [preflight] Running pre-flight checks
	I1027 20:15:15.602046   96282 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:15:15.602186   96282 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:15:15.602334   96282 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1027 20:15:15.849703   96282 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:15:12.425507   96119 pod_ready.go:94] pod "kube-controller-manager-pause-145997" is "Ready"
	I1027 20:15:12.425549   96119 pod_ready.go:86] duration metric: took 3.508916207s for pod "kube-controller-manager-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:12.429697   96119 pod_ready.go:83] waiting for pod "kube-proxy-2vzps" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:12.435281   96119 pod_ready.go:94] pod "kube-proxy-2vzps" is "Ready"
	I1027 20:15:12.435312   96119 pod_ready.go:86] duration metric: took 5.557418ms for pod "kube-proxy-2vzps" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:12.438660   96119 pod_ready.go:83] waiting for pod "kube-scheduler-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:15:14.445813   96119 pod_ready.go:104] pod "kube-scheduler-pause-145997" is not "Ready", error: <nil>
	I1027 20:15:16.102986   96119 pod_ready.go:94] pod "kube-scheduler-pause-145997" is "Ready"
	I1027 20:15:16.103023   96119 pod_ready.go:86] duration metric: took 3.664335129s for pod "kube-scheduler-pause-145997" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:15:16.103058   96119 pod_ready.go:40] duration metric: took 13.72752845s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:15:16.153280   96119 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 20:15:16.206577   96119 out.go:179] * Done! kubectl is now configured to use "pause-145997" cluster and "default" namespace by default
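
	The pause-145997 lines show the readiness gate that precedes "Done!": each control-plane pod in kube-system is polled until its Ready condition is true or it disappears. A hedged client-go sketch of such a poll (not minikube's pod_ready.go; the kubeconfig path and timings are placeholders):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is true, mirroring
	// the per-pod waits logged by pod_ready.go above.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // not visible yet or transient error: keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Hypothetical kubeconfig path; minikube points this at the profile's cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			fmt.Println(err)
			return
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println(err)
			return
		}
		if err := waitPodReady(cs, "kube-system", "etcd-pause-145997", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
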
	I1027 20:15:15.958599   96282 out.go:204]   - Generating certificates and keys ...
	I1027 20:15:15.958740   96282 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1027 20:15:15.958860   96282 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1027 20:15:15.958982   96282 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:15:16.134449   96282 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:15:16.377956   96282 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:15:16.520908   96282 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1027 20:15:16.650288   96282 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1027 20:15:16.650727   96282 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost stopped-upgrade-246578] and IPs [192.168.83.222 127.0.0.1 ::1]
	I1027 20:15:16.788147   96282 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1027 20:15:16.788354   96282 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost stopped-upgrade-246578] and IPs [192.168.83.222 127.0.0.1 ::1]
	I1027 20:15:16.857234   96282 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:15:17.056263   96282 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:15:17.425938   96282 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1027 20:15:17.426439   96282 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:15:17.640015   96282 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:15:17.705183   96282 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:15:17.798845   96282 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:15:17.890861   96282 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:15:17.891769   96282 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:15:17.894730   96282 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.884636987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596119884595513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=430f3ac4-27a7-493e-8286-0731f9dcc2cc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.885668053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0c52986-9e1e-49b6-920c-f959dc40cb7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.885833206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0c52986-9e1e-49b6-920c-f959dc40cb7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.886244976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0c52986-9e1e-49b6-920c-f959dc40cb7e name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.969286949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32b6e655-0b02-460b-a0b8-2970df4fc3dc name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.969539273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32b6e655-0b02-460b-a0b8-2970df4fc3dc name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.971449701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53516a4b-6953-47f2-8f3a-2c87271f750e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.972552021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596119972517639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53516a4b-6953-47f2-8f3a-2c87271f750e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.973435230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59b6d2a7-1f4a-47dc-8f3f-c898bfb34afc name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.973538027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59b6d2a7-1f4a-47dc-8f3f-c898bfb34afc name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:19 pause-145997 crio[2838]: time="2025-10-27 20:15:19.973952156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59b6d2a7-1f4a-47dc-8f3f-c898bfb34afc name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.049772159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf6370b4-4079-42c8-8612-5bfc2ce232ce name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.049940288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf6370b4-4079-42c8-8612-5bfc2ce232ce name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.051755177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a852a97f-bc6b-44c0-adf3-3816d5c8ab8e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.052672941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596120052638742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a852a97f-bc6b-44c0-adf3-3816d5c8ab8e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.053690566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9161c359-e6fe-465c-aa4e-5c458774eaa0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.054246816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9161c359-e6fe-465c-aa4e-5c458774eaa0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.055980147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9161c359-e6fe-465c-aa4e-5c458774eaa0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.133471354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9357254-7692-4de0-a171-72d6f98b202a name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.133714911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9357254-7692-4de0-a171-72d6f98b202a name=/runtime.v1.RuntimeService/Version
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.135048532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8dea367c-0ade-4548-90d4-98f32bafc3a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.135500149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761596120135475749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8dea367c-0ade-4548-90d4-98f32bafc3a3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.136293392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4e311c7-df0e-4735-ac9b-ae279feabd07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.136378220Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4e311c7-df0e-4735-ac9b-ae279feabd07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 27 20:15:20 pause-145997 crio[2838]: time="2025-10-27 20:15:20.136609189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2,PodSandboxId:70925c8a68ab964f50fdd8afc1abb18e86c67f877a8115d17fc6bd7a8928425e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761596101235915519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104,PodSandboxId:44d0a969409735462de9a3a698817537af734d07a550bceeb1ad0c9c2b4ce8b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761596100990355573,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761596096111718267,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761596096139574279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e39b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761596096099941766,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39,PodSandboxId:1d26ac923453bf72cc9bdc1f3160bc67c99d6f2f7743bc84097396144af1eac7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761596090258335964,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca,PodSandboxId:59e2bb88f2cbfe809ed5c8593501ae20f9050e865ac5e3
9b1feacd694084a043,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761596089661596786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd75f8f8b3c018aefd65cc6e9837f750,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a139373de7fd05501114a4
995b989c4548d3ce9876179050be3b9f77ea24633a,PodSandboxId:cbc1df63a2d7f4ecb6b0b59053743306b74617aa2c32f19cc0c2fa9009177763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761596089654549201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5723c50c3a0cc61b3bdf541867db4a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a,PodSandboxId:f84f54fa556b6b50dd5b89b4133b7bedca44d5610a7711e02c0acb940485b6d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761596089644221791,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7978fd7c60ce7274985091dc8bc428f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf,PodSandboxId:c34606fc8cf74781d081c72a7a145eb9e0aab5c862d800e0db485829fcd71ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761596042787123308,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4qs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c6d26c-1ff4-4a98-b0f6-963244a8a802,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c,PodSandboxId:76b8dcf42698b5242d0cc43c49cd27e14b6c3605b4d34a8fca47529942f93f84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17615
96041297297523,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2vzps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01869f53-a897-4a1a-b5be-ceafca2e105b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1,PodSandboxId:03c81f73b76a7cab6a80ca5ad4d3eb2d83dd2fb19745f7cb0c8f9ca5e1cba3fe,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761596029670447348,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-145997,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dbd60e6128b2bcb6ef173322a403223,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4e311c7-df0e-4735-ac9b-ae279feabd07 name=/runtime.v1.RuntimeService/ListContainers
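	The debug entries above show CRI-O serving the kubelet's CRI RuntimeService.ListContainers calls over its gRPC socket. The same payload can be requested by hand from inside the node with crictl; this is a minimal sketch, and the socket path is the CRI-O default rather than something taken from this log.

	  # Ask CRI-O for the full container list (all states) as raw JSON,
	  # mirroring the ListContainersResponse entries logged above.
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json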
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bf534bdbf58dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 seconds ago       Running             coredns                   1                   70925c8a68ab9       coredns-66bc5c9577-4qs4m
	69d91656fd6e4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   19 seconds ago       Running             kube-proxy                1                   44d0a96940973       kube-proxy-2vzps
	32fdda72d1f3e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   24 seconds ago       Running             kube-controller-manager   2                   cbc1df63a2d7f       kube-controller-manager-pause-145997
	17aaf053be8fe       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   24 seconds ago       Running             kube-scheduler            2                   f84f54fa556b6       kube-scheduler-pause-145997
	09c474d221a72       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   24 seconds ago       Running             kube-apiserver            2                   59e2bb88f2cbf       kube-apiserver-pause-145997
	7c1d044ae6575       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   29 seconds ago       Running             etcd                      1                   1d26ac923453b       etcd-pause-145997
	921e5f1ba1c1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   30 seconds ago       Exited              kube-apiserver            1                   59e2bb88f2cbf       kube-apiserver-pause-145997
	a139373de7fd0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   30 seconds ago       Exited              kube-controller-manager   1                   cbc1df63a2d7f       kube-controller-manager-pause-145997
	4fc82ec501049       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   30 seconds ago       Exited              kube-scheduler            1                   f84f54fa556b6       kube-scheduler-pause-145997
	59a7505d23244       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   c34606fc8cf74       coredns-66bc5c9577-4qs4m
	f4fe26d13f640       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   76b8dcf42698b       kube-proxy-2vzps
	8f5d1271f2e4f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   03c81f73b76a7       etcd-pause-145997
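	A table like the one above can be regenerated while the cluster is still up; a minimal sketch, assuming the pause-145997 profile is running and crictl is available on the node:

	  # Human-readable container listing from inside the minikube VM,
	  # including exited attempts (the -a flag).
	  minikube -p pause-145997 ssh "sudo crictl ps -a"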
	
	
	==> coredns [59a7505d232443bca3d02905fc8c66a4ee8f6a5d5097c42c474de3698a8cd5bf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bf534bdbf58dd599671a66c54a6c649259aaa51f304334c708aa8dc1e84e82f2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36125 - 44107 "HINFO IN 5402909375033114884.5487682932040252686. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018346463s
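	The two SHA512 values in the CoreDNS logs above are hashes of the Corefile before and after the reload. The active Corefile lives in the coredns ConfigMap and can be inspected with kubectl; the context name below is assumed to follow minikube's profile naming.

	  # Show the Corefile that CoreDNS reloaded; the reload plugin's hash
	  # is computed over this configuration.
	  kubectl --context pause-145997 -n kube-system get configmap coredns -o yaml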
	
	
	==> describe nodes <==
	Name:               pause-145997
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-145997
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=pause-145997
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_13_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:13:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-145997
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:15:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:14:59 +0000   Mon, 27 Oct 2025 20:13:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.115
	  Hostname:    pause-145997
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 259375e4d11443748d324a099e09148b
	  System UUID:                259375e4-d114-4374-8d32-4a099e09148b
	  Boot ID:                    7aac5f79-0ec7-4f01-89c8-40c006ad9883
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4qs4m                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     79s
	  kube-system                 etcd-pause-145997                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         85s
	  kube-system                 kube-apiserver-pause-145997             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-145997    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-2vzps                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-145997             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     92s (x7 over 92s)  kubelet          Node pause-145997 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-145997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-145997 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s                kubelet          Node pause-145997 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s                kubelet          Node pause-145997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s                kubelet          Node pause-145997 status is now: NodeHasSufficientPID
	  Normal  NodeReady                84s                kubelet          Node pause-145997 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node pause-145997 event: Registered Node pause-145997 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-145997 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-145997 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-145997 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-145997 event: Registered Node pause-145997 in Controller
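	The percentages under "Allocated resources" above are pod requests divided by the node's allocatable values: 750m of the 2 allocatable CPUs is the 37% shown, and 170Mi of the 3042712Ki allocatable memory rounds to the 5% shown. The same summary can be pulled live with kubectl (context name assumed from the profile):

	  # Reproduce the node summary above, including the request/limit percentages.
	  kubectl --context pause-145997 describe node pause-145997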
	
	
	==> dmesg <==
	[Oct27 20:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001520] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005029] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.170941] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.092663] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.136411] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.119830] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.159885] kauditd_printk_skb: 171 callbacks suppressed
	[Oct27 20:14] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.123963] kauditd_printk_skb: 228 callbacks suppressed
	[  +0.114818] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.487528] kauditd_printk_skb: 216 callbacks suppressed
	[Oct27 20:15] kauditd_printk_skb: 85 callbacks suppressed
	
	
	==> etcd [7c1d044ae6575530689a8969a4f11c42ef57f8fc1f2a96ca5cbcab22ed998f39] <==
	{"level":"warn","ts":"2025-10-27T20:14:58.684111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.694163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.704668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.714244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.725245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.732635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.744398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.750965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.769451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.790022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.797124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.805570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.814416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:14:58.870982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52712","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T20:15:15.708131Z","caller":"traceutil/trace.go:172","msg":"trace[26727978] linearizableReadLoop","detail":"{readStateIndex:574; appliedIndex:574; }","duration":"108.676348ms","start":"2025-10-27T20:15:15.599433Z","end":"2025-10-27T20:15:15.708110Z","steps":["trace[26727978] 'read index received'  (duration: 108.670403ms)","trace[26727978] 'applied index is now lower than readState.Index'  (duration: 4.974µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:15:15.708303Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.846875ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T20:15:15.708351Z","caller":"traceutil/trace.go:172","msg":"trace[610763209] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:530; }","duration":"108.913721ms","start":"2025-10-27T20:15:15.599427Z","end":"2025-10-27T20:15:15.708341Z","steps":["trace[610763209] 'agreement among raft nodes before linearized reading'  (duration: 108.818491ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:15:15.709324Z","caller":"traceutil/trace.go:172","msg":"trace[609209266] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"216.56273ms","start":"2025-10-27T20:15:15.492747Z","end":"2025-10-27T20:15:15.709310Z","steps":["trace[609209266] 'process raft request'  (duration: 215.687998ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:15:16.086870Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.166402ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15752072140304252444 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" mod_revision:531 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T20:15:16.087029Z","caller":"traceutil/trace.go:172","msg":"trace[1784464789] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:575; }","duration":"151.82843ms","start":"2025-10-27T20:15:15.935187Z","end":"2025-10-27T20:15:16.087015Z","steps":["trace[1784464789] 'read index received'  (duration: 92.522µs)","trace[1784464789] 'applied index is now lower than readState.Index'  (duration: 151.734458ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:15:16.087309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.132549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" limit:1 ","response":"range_response_count:1 size:4854"}
	{"level":"info","ts":"2025-10-27T20:15:16.087405Z","caller":"traceutil/trace.go:172","msg":"trace[988378114] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-pause-145997; range_end:; response_count:1; response_revision:532; }","duration":"152.22942ms","start":"2025-10-27T20:15:15.935155Z","end":"2025-10-27T20:15:16.087384Z","steps":["trace[988378114] 'agreement among raft nodes before linearized reading'  (duration: 151.944266ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:15:16.087565Z","caller":"traceutil/trace.go:172","msg":"trace[895888739] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"364.16099ms","start":"2025-10-27T20:15:15.723390Z","end":"2025-10-27T20:15:16.087551Z","steps":["trace[895888739] 'process raft request'  (duration: 123.587981ms)","trace[895888739] 'compare'  (duration: 238.964328ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:15:16.087670Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:15:15.723361Z","time spent":"364.255569ms","remote":"127.0.0.1:51950","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4839,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" mod_revision:531 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" value_size:4777 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-145997\" > >"}
	{"level":"warn","ts":"2025-10-27T20:15:16.449489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.803073ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15752072140304252447 > lease_revoke:<id:5a9a9a274f297cf6>","response":"size:28"}
	
	
	==> etcd [8f5d1271f2e4f30bdedf83cb576646bc27f6c3a809fec4473a753ba2f1601af1] <==
	{"level":"warn","ts":"2025-10-27T20:14:02.836378Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"283.968443ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4413"}
	{"level":"info","ts":"2025-10-27T20:14:02.836390Z","caller":"traceutil/trace.go:172","msg":"trace[1115120290] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:390; }","duration":"283.983789ms","start":"2025-10-27T20:14:02.552403Z","end":"2025-10-27T20:14:02.836387Z","steps":["trace[1115120290] 'agreement among raft nodes before linearized reading'  (duration: 283.920486ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T20:14:02.836453Z","caller":"traceutil/trace.go:172","msg":"trace[1786370370] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"398.731546ms","start":"2025-10-27T20:14:02.437717Z","end":"2025-10-27T20:14:02.836449Z","steps":["trace[1786370370] 'process raft request'  (duration: 398.255802ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T20:14:02.836485Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:14:02.437703Z","time spent":"398.761666ms","remote":"127.0.0.1:39392","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":788,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-4qs4m.187272446ada171d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-4qs4m.187272446ada171d\" value_size:700 lease:6528700103433962165 >> failure:<>"}
	{"level":"warn","ts":"2025-10-27T20:14:02.843612Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"402.246283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T20:14:02.843831Z","caller":"traceutil/trace.go:172","msg":"trace[655704997] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:387; }","duration":"402.446907ms","start":"2025-10-27T20:14:02.441326Z","end":"2025-10-27T20:14:02.843773Z","steps":["trace[655704997] 'agreement among raft nodes before linearized reading'  (duration: 117.049555ms)","trace[655704997] 'range keys from in-memory index tree'  (duration: 276.829138ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T20:14:02.844549Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T20:14:02.441314Z","time spent":"403.21472ms","remote":"127.0.0.1:39566","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-10-27T20:14:39.717554Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T20:14:39.717706Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-145997","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.115:2380"],"advertise-client-urls":["https://192.168.72.115:2379"]}
	{"level":"error","ts":"2025-10-27T20:14:39.723203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T20:14:39.802763Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T20:14:39.804290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:14:39.804342Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ef80b93f13cda9a","current-leader-member-id":"ef80b93f13cda9a"}
	{"level":"info","ts":"2025-10-27T20:14:39.804413Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-27T20:14:39.804428Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804432Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804500Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T20:14:39.804514Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804551Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.115:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T20:14:39.804558Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.115:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T20:14:39.804563Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.115:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:14:39.807550Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.115:2380"}
	{"level":"error","ts":"2025-10-27T20:14:39.807610Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.115:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T20:14:39.807728Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.115:2380"}
	{"level":"info","ts":"2025-10-27T20:14:39.807752Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-145997","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.115:2380"],"advertise-client-urls":["https://192.168.72.115:2379"]}
	
	
	==> kernel <==
	 20:15:20 up 2 min,  0 users,  load average: 0.81, 0.40, 0.15
	Linux pause-145997 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [09c474d221a72dda544da83f00d27e1a654e604411e2a1387afc6d6e8126f660] <==
	I1027 20:14:59.668220       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:14:59.668256       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:14:59.677118       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:14:59.677549       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:14:59.677655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:14:59.703352       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:14:59.704254       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 20:14:59.704861       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:14:59.705093       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 20:14:59.705111       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:14:59.707003       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:14:59.735059       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:14:59.735097       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:14:59.735182       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:14:59.735209       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:15:00.475631       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:15:00.536486       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1027 20:15:01.067270       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.115]
	I1027 20:15:01.073324       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:15:01.098707       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:15:01.856733       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:15:01.933763       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 20:15:01.977034       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:15:01.987321       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:15:02.965949       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [921e5f1ba1c1b8110a1f33cd1ada6314ca2ed64bf4b8c5d850a6c716894257ca] <==
	I1027 20:14:53.197147       1 controller.go:120] Shutting down OpenAPI V3 controller
	I1027 20:14:53.197164       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1027 20:14:53.197572       1 controller.go:132] Ending legacy_token_tracking_controller
	I1027 20:14:53.197640       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1027 20:14:53.197661       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	E1027 20:14:53.197714       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	E1027 20:14:53.197758       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	I1027 20:14:53.197857       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	E1027 20:14:53.197878       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="cluster_authentication_trust_controller"
	E1027 20:14:53.197921       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	F1027 20:14:53.198013       1 hooks.go:204] PostStartHook "priority-and-fairness-config-producer" failed: APF bootstrap ensurer timed out waiting for cache sync
	I1027 20:14:53.301911       1 local_available_controller.go:164] Shutting down LocalAvailability controller
	I1027 20:14:53.302025       1 cluster_authentication_trust_controller.go:467] Shutting down cluster_authentication_trust_controller controller
	I1027 20:14:53.302042       1 apiservice_controller.go:104] Shutting down APIServiceRegistrationController
	I1027 20:14:53.302122       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 20:14:53.302135       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 20:14:53.302217       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1027 20:14:53.302427       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1027 20:14:53.302498       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 20:14:53.302545       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1027 20:14:53.302585       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1027 20:14:53.302648       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1027 20:14:53.196705       1 establishing_controller.go:92] Shutting down EstablishingController
	I1027 20:14:53.197155       1 autoregister_controller.go:168] Shutting down autoregister controller
	I1027 20:14:53.198049       1 remote_available_controller.go:433] Shutting down RemoteAvailability controller
	
	
	==> kube-controller-manager [32fdda72d1f3e3c77707054ccc9df8291686fb4736e4d3bc4f25ffb6249f5846] <==
	I1027 20:15:02.967229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:15:02.972589       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:15:02.972635       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:15:02.981134       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:15:02.983553       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 20:15:02.988939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:15:02.990214       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:15:02.990567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:15:02.990595       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:15:02.990601       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:15:02.997989       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:15:02.998135       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:15:03.003773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:15:03.007507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:15:03.007571       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:15:03.007685       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:15:03.007774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:15:03.008842       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 20:15:03.008888       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:15:03.011377       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:15:03.011540       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:15:03.011656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 20:15:03.011673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:15:03.015201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:15:03.017447       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	
	
	==> kube-controller-manager [a139373de7fd05501114a4995b989c4548d3ce9876179050be3b9f77ea24633a] <==
	I1027 20:14:51.324084       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:14:51.533512       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1027 20:14:51.533559       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:14:51.536336       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1027 20:14:51.537669       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1027 20:14:51.537886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:14:51.537986       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [69d91656fd6e4394466a5c7ea98154b54ea6285173eb592a88a21d74ebb5c104] <==
	I1027 20:15:01.306197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:15:01.407001       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:15:01.407184       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.115"]
	E1027 20:15:01.407261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:15:01.455561       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 20:15:01.455658       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 20:15:01.455688       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:15:01.468980       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:15:01.469357       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:15:01.469647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:15:01.484023       1 config.go:200] "Starting service config controller"
	I1027 20:15:01.484210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:15:01.484244       1 config.go:309] "Starting node config controller"
	I1027 20:15:01.485760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:15:01.485955       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:15:01.484733       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:15:01.486216       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:15:01.484744       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:15:01.486945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:15:01.585906       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:15:01.587325       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:15:01.588609       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [f4fe26d13f640798c6ea20dd6f7972ec4eae0c5d48d146fd3606e571a939563c] <==
	I1027 20:14:01.682871       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:14:01.786767       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:14:01.786856       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.115"]
	E1027 20:14:01.786965       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:14:02.011859       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1027 20:14:02.012943       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1027 20:14:02.013090       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:14:02.119684       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:14:02.120478       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:14:02.120852       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:14:02.128740       1 config.go:200] "Starting service config controller"
	I1027 20:14:02.128950       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:14:02.129071       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:14:02.129094       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:14:02.129297       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:14:02.129303       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:14:02.135563       1 config.go:309] "Starting node config controller"
	I1027 20:14:02.135595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:14:02.135602       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:14:02.229830       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:14:02.229922       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:14:02.230131       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [17aaf053be8fe790f733518dc047995ba09979759f285b06bdb9aa50da6d1c4a] <==
	I1027 20:14:57.369121       1 serving.go:386] Generated self-signed cert in-memory
	W1027 20:14:59.495475       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 20:14:59.496209       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 20:14:59.496272       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 20:14:59.496280       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 20:14:59.590446       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:14:59.591878       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:14:59.600597       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:59.600722       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:59.601295       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:14:59.601314       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:14:59.705197       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [4fc82ec50104917bca670b2e4ac750cd14acc60d37d1675ff8b4e216c8d44a9a] <==
	E1027 20:14:53.144951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 20:14:53.145080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:14:53.146821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 20:14:53.147747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 20:14:53.148679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:14:53.150463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:14:53.150574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 20:14:53.150679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:14:53.147684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:14:53.151094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 20:14:53.151201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 20:14:53.151276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 20:14:53.151389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:14:53.151487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:14:53.151551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 20:14:53.151719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 20:14:53.154557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 20:14:53.609515       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1027 20:14:53.609920       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 20:14:53.609964       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 20:14:53.610029       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	E1027 20:14:53.609916       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:53.610167       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:14:53.610669       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 20:14:53.610750       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 20:14:57 pause-145997 kubelet[3496]: E1027 20:14:57.621105    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:57 pause-145997 kubelet[3496]: E1027 20:14:57.623632    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:58 pause-145997 kubelet[3496]: E1027 20:14:58.623268    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:58 pause-145997 kubelet[3496]: E1027 20:14:58.626580    3496 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-145997\" not found" node="pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.744059    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.748208    3496 kubelet_node_status.go:124] "Node was previously registered" node="pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.748292    3496 kubelet_node_status.go:78] "Successfully registered node" node="pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.748326    3496 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.751140    3496 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.792423    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-145997\" already exists" pod="kube-system/etcd-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.792471    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.808239    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-145997\" already exists" pod="kube-system/kube-apiserver-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.808855    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.822264    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-145997\" already exists" pod="kube-system/kube-controller-manager-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: I1027 20:14:59.822298    3496 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-145997"
	Oct 27 20:14:59 pause-145997 kubelet[3496]: E1027 20:14:59.843914    3496 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-145997\" already exists" pod="kube-system/kube-scheduler-pause-145997"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.384398    3496 apiserver.go:52] "Watching apiserver"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.444106    3496 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.531936    3496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01869f53-a897-4a1a-b5be-ceafca2e105b-lib-modules\") pod \"kube-proxy-2vzps\" (UID: \"01869f53-a897-4a1a-b5be-ceafca2e105b\") " pod="kube-system/kube-proxy-2vzps"
	Oct 27 20:15:00 pause-145997 kubelet[3496]: I1027 20:15:00.531966    3496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01869f53-a897-4a1a-b5be-ceafca2e105b-xtables-lock\") pod \"kube-proxy-2vzps\" (UID: \"01869f53-a897-4a1a-b5be-ceafca2e105b\") " pod="kube-system/kube-proxy-2vzps"
	Oct 27 20:15:04 pause-145997 kubelet[3496]: I1027 20:15:04.614462    3496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 20:15:05 pause-145997 kubelet[3496]: E1027 20:15:05.583455    3496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761596105582839077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 20:15:05 pause-145997 kubelet[3496]: E1027 20:15:05.583496    3496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761596105582839077  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 20:15:15 pause-145997 kubelet[3496]: E1027 20:15:15.585557    3496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761596115585147376  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 27 20:15:15 pause-145997 kubelet[3496]: E1027 20:15:15.585582    3496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761596115585147376  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-145997 -n pause-145997
helpers_test.go:269: (dbg) Run:  kubectl --context pause-145997 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (70.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (943.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: exit status 80 (15m43.516329293s)

                                                
                                                
-- stdout --
	* [calico-764820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "calico-764820" primary control-plane node in "calico-764820" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:21:54.098812  102532 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:21:54.099142  102532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:21:54.099162  102532 out.go:374] Setting ErrFile to fd 2...
	I1027 20:21:54.099170  102532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:21:54.099515  102532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:21:54.100226  102532 out.go:368] Setting JSON to false
	I1027 20:21:54.101707  102532 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":11064,"bootTime":1761585450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 20:21:54.101858  102532 start.go:141] virtualization: kvm guest
	I1027 20:21:54.179317  102532 out.go:179] * [calico-764820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 20:21:54.239583  102532 notify.go:220] Checking for updates...
	I1027 20:21:54.240375  102532 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:21:54.291342  102532 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:21:54.375512  102532 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:21:54.407398  102532 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:21:54.533905  102532 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 20:21:54.598751  102532 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:21:54.606897  102532 config.go:182] Loaded profile config "auto-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:21:54.607084  102532 config.go:182] Loaded profile config "guest-291039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 20:21:54.607246  102532 config.go:182] Loaded profile config "kindnet-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:21:54.607384  102532 config.go:182] Loaded profile config "newest-cni-528878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:21:54.607519  102532 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:21:54.713529  102532 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 20:21:54.808326  102532 start.go:305] selected driver: kvm2
	I1027 20:21:54.808351  102532 start.go:925] validating driver "kvm2" against <nil>
	I1027 20:21:54.808364  102532 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:21:54.809160  102532 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:21:54.809391  102532 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:21:54.809419  102532 cni.go:84] Creating CNI manager for "calico"
	I1027 20:21:54.809426  102532 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1027 20:21:54.809457  102532 start.go:349] cluster config:
	{Name:calico-764820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-764820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:21:54.809544  102532 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:21:55.020421  102532 out.go:179] * Starting "calico-764820" primary control-plane node in "calico-764820" cluster
	I1027 20:21:55.161131  102532 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:21:55.161211  102532 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 20:21:55.161226  102532 cache.go:58] Caching tarball of preloaded images
	I1027 20:21:55.161328  102532 preload.go:233] Found /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 20:21:55.161342  102532 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:21:55.161506  102532 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/config.json ...
	I1027 20:21:55.161535  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/config.json: {Name:mk3a2b709f17ec3e70ab5f7a21a88067d7494b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:21:55.161722  102532 start.go:360] acquireMachinesLock for calico-764820: {Name:mk93a855054c8dcf81931234082a94fdc68a4726 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1027 20:22:08.356242  102532 start.go:364] duration metric: took 13.194468276s to acquireMachinesLock for "calico-764820"
	I1027 20:22:08.356331  102532 start.go:93] Provisioning new machine with config: &{Name:calico-764820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-764820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:22:08.356465  102532 start.go:125] createHost starting for "" (driver="kvm2")
	I1027 20:22:08.358879  102532 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1027 20:22:08.359140  102532 start.go:159] libmachine.API.Create for "calico-764820" (driver="kvm2")
	I1027 20:22:08.359175  102532 client.go:168] LocalClient.Create starting
	I1027 20:22:08.359275  102532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem
	I1027 20:22:08.359328  102532 main.go:141] libmachine: Decoding PEM data...
	I1027 20:22:08.359356  102532 main.go:141] libmachine: Parsing certificate...
	I1027 20:22:08.359441  102532 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem
	I1027 20:22:08.359469  102532 main.go:141] libmachine: Decoding PEM data...
	I1027 20:22:08.359480  102532 main.go:141] libmachine: Parsing certificate...
	I1027 20:22:08.360106  102532 main.go:141] libmachine: creating domain...
	I1027 20:22:08.360122  102532 main.go:141] libmachine: creating network...
	I1027 20:22:08.362044  102532 main.go:141] libmachine: found existing default network
	I1027 20:22:08.362545  102532 main.go:141] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 20:22:08.363561  102532 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:76:17} reservation:<nil>}
	I1027 20:22:08.364477  102532 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:7c:85} reservation:<nil>}
	I1027 20:22:08.365306  102532 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c5ee40}
	I1027 20:22:08.365388  102532 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-calico-764820</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 20:22:08.371166  102532 main.go:141] libmachine: creating private network mk-calico-764820 192.168.61.0/24...
	I1027 20:22:08.452486  102532 main.go:141] libmachine: private network mk-calico-764820 192.168.61.0/24 created
	I1027 20:22:08.452916  102532 main.go:141] libmachine: <network>
	  <name>mk-calico-764820</name>
	  <uuid>7eba241e-9faf-4a0a-bfbe-8682e7e24fb0</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:31:8f:a7'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1027 20:22:08.452972  102532 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820 ...
	I1027 20:22:08.453008  102532 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1027 20:22:08.453046  102532 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:22:08.453140  102532 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21801-58821/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1027 20:22:08.739702  102532 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa...
	I1027 20:22:09.114823  102532 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/calico-764820.rawdisk...
	I1027 20:22:09.114867  102532 main.go:141] libmachine: Writing magic tar header
	I1027 20:22:09.114894  102532 main.go:141] libmachine: Writing SSH key tar header
	I1027 20:22:09.114965  102532 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820 ...
	I1027 20:22:09.115049  102532 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820
	I1027 20:22:09.115085  102532 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820 (perms=drwx------)
	I1027 20:22:09.115105  102532 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube/machines
	I1027 20:22:09.115114  102532 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube/machines (perms=drwxr-xr-x)
	I1027 20:22:09.115124  102532 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:22:09.115134  102532 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821/.minikube (perms=drwxr-xr-x)
	I1027 20:22:09.115143  102532 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21801-58821
	I1027 20:22:09.115151  102532 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21801-58821 (perms=drwxrwxr-x)
	I1027 20:22:09.115162  102532 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1027 20:22:09.115170  102532 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1027 20:22:09.115178  102532 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1027 20:22:09.115185  102532 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1027 20:22:09.115192  102532 main.go:141] libmachine: checking permissions on dir: /home
	I1027 20:22:09.115199  102532 main.go:141] libmachine: skipping /home - not owner
	I1027 20:22:09.115206  102532 main.go:141] libmachine: defining domain...
	I1027 20:22:09.116554  102532 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>calico-764820</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/calico-764820.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-calico-764820'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1027 20:22:09.121336  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:fd:60:bc in network default
	I1027 20:22:09.122023  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:09.122056  102532 main.go:141] libmachine: starting domain...
	I1027 20:22:09.122061  102532 main.go:141] libmachine: ensuring networks are active...
	I1027 20:22:09.123005  102532 main.go:141] libmachine: Ensuring network default is active
	I1027 20:22:09.123488  102532 main.go:141] libmachine: Ensuring network mk-calico-764820 is active
	I1027 20:22:09.124236  102532 main.go:141] libmachine: getting domain XML...
	I1027 20:22:09.125561  102532 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>calico-764820</name>
	  <uuid>28b19bc2-2a9d-4260-99ec-1fd2714fc38e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/calico-764820.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:4a:64:3d'/>
	      <source network='mk-calico-764820'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:fd:60:bc'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
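	(The expanded XML above is libvirt's view of the domain after definition, with the UUID, interface MACs, and PCI addresses filled in; roughly the same information can be pulled manually with standard virsh subcommands, a sketch only:

	    virsh --connect qemu:///system dumpxml calico-764820                     # definition as expanded by libvirt
	    virsh --connect qemu:///system domifaddr calico-764820 --source lease    # DHCP-lease lookup the log retries below
	    virsh --connect qemu:///system domifaddr calico-764820 --source arp      # ARP fallback used when no lease is found
	)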
	
	I1027 20:22:10.545915  102532 main.go:141] libmachine: waiting for domain to start...
	I1027 20:22:10.547820  102532 main.go:141] libmachine: domain is now running
	I1027 20:22:10.547843  102532 main.go:141] libmachine: waiting for IP...
	I1027 20:22:10.548899  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:10.549684  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:10.549703  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:10.550246  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:10.550305  102532 retry.go:31] will retry after 249.17993ms: waiting for domain to come up
	I1027 20:22:10.800982  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:10.802092  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:10.802114  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:10.802642  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:10.802691  102532 retry.go:31] will retry after 388.514315ms: waiting for domain to come up
	I1027 20:22:11.193314  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:11.194098  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:11.194115  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:11.194536  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:11.194579  102532 retry.go:31] will retry after 339.452011ms: waiting for domain to come up
	I1027 20:22:11.536259  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:11.537089  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:11.537110  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:11.537541  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:11.537588  102532 retry.go:31] will retry after 422.383329ms: waiting for domain to come up
	I1027 20:22:11.961402  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:11.962363  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:11.962389  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:11.962941  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:11.962994  102532 retry.go:31] will retry after 730.756625ms: waiting for domain to come up
	I1027 20:22:12.695163  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:12.696091  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:12.696114  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:12.696653  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:12.696701  102532 retry.go:31] will retry after 827.158524ms: waiting for domain to come up
	I1027 20:22:13.525961  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:13.526858  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:13.526887  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:13.527391  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:13.527438  102532 retry.go:31] will retry after 850.210693ms: waiting for domain to come up
	I1027 20:22:14.380068  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:14.380868  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:14.380893  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:14.381384  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:14.381433  102532 retry.go:31] will retry after 1.027579559s: waiting for domain to come up
	I1027 20:22:15.410338  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:15.411294  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:15.411315  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:15.411746  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:15.411779  102532 retry.go:31] will retry after 1.711070442s: waiting for domain to come up
	I1027 20:22:17.125679  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:17.126474  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:17.126491  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:17.126901  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:17.126944  102532 retry.go:31] will retry after 1.723560459s: waiting for domain to come up
	I1027 20:22:18.852813  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:18.853549  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:18.853567  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:18.854029  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:18.854096  102532 retry.go:31] will retry after 1.913547497s: waiting for domain to come up
	I1027 20:22:20.769402  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:20.770205  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:20.770225  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:20.770721  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:20.770784  102532 retry.go:31] will retry after 2.839533116s: waiting for domain to come up
	I1027 20:22:23.613349  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:23.614065  102532 main.go:141] libmachine: no network interface addresses found for domain calico-764820 (source=lease)
	I1027 20:22:23.614104  102532 main.go:141] libmachine: trying to list again with source=arp
	I1027 20:22:23.614532  102532 main.go:141] libmachine: unable to find current IP address of domain calico-764820 in network mk-calico-764820 (interfaces detected: [])
	I1027 20:22:23.614575  102532 retry.go:31] will retry after 3.67228362s: waiting for domain to come up
	I1027 20:22:27.291192  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.291907  102532 main.go:141] libmachine: domain calico-764820 has current primary IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.291935  102532 main.go:141] libmachine: found domain IP: 192.168.61.159
	I1027 20:22:27.291948  102532 main.go:141] libmachine: reserving static IP address...
	I1027 20:22:27.292504  102532 main.go:141] libmachine: unable to find host DHCP lease matching {name: "calico-764820", mac: "52:54:00:4a:64:3d", ip: "192.168.61.159"} in network mk-calico-764820
	I1027 20:22:27.559494  102532 main.go:141] libmachine: reserved static IP address 192.168.61.159 for domain calico-764820
	I1027 20:22:27.559519  102532 main.go:141] libmachine: waiting for SSH...
	I1027 20:22:27.559526  102532 main.go:141] libmachine: Getting to WaitForSSH function...
	I1027 20:22:27.563289  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.563811  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:27.563843  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.564084  102532 main.go:141] libmachine: Using SSH client type: native
	I1027 20:22:27.564390  102532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I1027 20:22:27.564405  102532 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1027 20:22:27.678198  102532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:22:27.678625  102532 main.go:141] libmachine: domain creation complete
	I1027 20:22:27.680376  102532 machine.go:93] provisionDockerMachine start ...
	I1027 20:22:27.682894  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.683308  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:27.683338  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.683519  102532 main.go:141] libmachine: Using SSH client type: native
	I1027 20:22:27.683715  102532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I1027 20:22:27.683727  102532 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:22:27.793241  102532 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1027 20:22:27.793278  102532 buildroot.go:166] provisioning hostname "calico-764820"
	I1027 20:22:27.796946  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.797526  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:27.797563  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.797760  102532 main.go:141] libmachine: Using SSH client type: native
	I1027 20:22:27.798065  102532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I1027 20:22:27.798084  102532 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-764820 && echo "calico-764820" | sudo tee /etc/hostname
	I1027 20:22:27.939371  102532 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-764820
	
	I1027 20:22:27.942640  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.943190  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:27.943224  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:27.943454  102532 main.go:141] libmachine: Using SSH client type: native
	I1027 20:22:27.943686  102532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I1027 20:22:27.943709  102532 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-764820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-764820/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-764820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:22:28.072602  102532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:22:28.072643  102532 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21801-58821/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-58821/.minikube}
	I1027 20:22:28.072677  102532 buildroot.go:174] setting up certificates
	I1027 20:22:28.072692  102532 provision.go:84] configureAuth start
	I1027 20:22:28.076544  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.077133  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.077168  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.079666  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.079972  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.080001  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.080179  102532 provision.go:143] copyHostCerts
	I1027 20:22:28.080240  102532 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem, removing ...
	I1027 20:22:28.080259  102532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem
	I1027 20:22:28.080329  102532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/ca.pem (1078 bytes)
	I1027 20:22:28.080428  102532 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem, removing ...
	I1027 20:22:28.080440  102532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem
	I1027 20:22:28.080476  102532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/cert.pem (1123 bytes)
	I1027 20:22:28.080535  102532 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem, removing ...
	I1027 20:22:28.080542  102532 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem
	I1027 20:22:28.080566  102532 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-58821/.minikube/key.pem (1675 bytes)
	I1027 20:22:28.080621  102532 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem org=jenkins.calico-764820 san=[127.0.0.1 192.168.61.159 calico-764820 localhost minikube]
	I1027 20:22:28.182107  102532 provision.go:177] copyRemoteCerts
	I1027 20:22:28.182169  102532 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:22:28.185134  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.185590  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.185627  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.185813  102532 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa Username:docker}
	I1027 20:22:28.280316  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 20:22:28.315917  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:22:28.356076  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:22:28.393456  102532 provision.go:87] duration metric: took 320.747893ms to configureAuth
	I1027 20:22:28.393491  102532 buildroot.go:189] setting minikube options for container-runtime
	I1027 20:22:28.393724  102532 config.go:182] Loaded profile config "calico-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:22:28.396765  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.397225  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.397257  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.397451  102532 main.go:141] libmachine: Using SSH client type: native
	I1027 20:22:28.397732  102532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I1027 20:22:28.397759  102532 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:22:28.682217  102532 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:22:28.682252  102532 machine.go:96] duration metric: took 1.00185558s to provisionDockerMachine
	I1027 20:22:28.682266  102532 client.go:171] duration metric: took 20.323079856s to LocalClient.Create
	I1027 20:22:28.682297  102532 start.go:167] duration metric: took 20.323163304s to libmachine.API.Create "calico-764820"
	I1027 20:22:28.682306  102532 start.go:293] postStartSetup for "calico-764820" (driver="kvm2")
	I1027 20:22:28.682319  102532 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:22:28.682386  102532 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:22:28.685621  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.686013  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.686050  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.686304  102532 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa Username:docker}
	I1027 20:22:28.778532  102532 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:22:28.784432  102532 info.go:137] Remote host: Buildroot 2025.02
	I1027 20:22:28.784461  102532 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/addons for local assets ...
	I1027 20:22:28.784534  102532 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-58821/.minikube/files for local assets ...
	I1027 20:22:28.784648  102532 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem -> 627052.pem in /etc/ssl/certs
	I1027 20:22:28.784815  102532 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:22:28.798800  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:22:28.832307  102532 start.go:296] duration metric: took 149.981936ms for postStartSetup
	I1027 20:22:28.835539  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.835986  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.836009  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.836343  102532 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/config.json ...
	I1027 20:22:28.836593  102532 start.go:128] duration metric: took 20.480113448s to createHost
	I1027 20:22:28.838687  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.839081  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.839115  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.839325  102532 main.go:141] libmachine: Using SSH client type: native
	I1027 20:22:28.839604  102532 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I1027 20:22:28.839619  102532 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1027 20:22:28.958020  102532 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761596548.912045483
	
	I1027 20:22:28.958078  102532 fix.go:216] guest clock: 1761596548.912045483
	I1027 20:22:28.958090  102532 fix.go:229] Guest: 2025-10-27 20:22:28.912045483 +0000 UTC Remote: 2025-10-27 20:22:28.836608134 +0000 UTC m=+34.813859513 (delta=75.437349ms)
	I1027 20:22:28.958122  102532 fix.go:200] guest clock delta is within tolerance: 75.437349ms
	I1027 20:22:28.958130  102532 start.go:83] releasing machines lock for "calico-764820", held for 20.60185395s
	I1027 20:22:28.961847  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.962348  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.962385  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.963023  102532 ssh_runner.go:195] Run: cat /version.json
	I1027 20:22:28.963116  102532 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:22:28.967019  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.967345  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.967540  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.967579  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.967780  102532 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa Username:docker}
	I1027 20:22:28.967832  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:28.967866  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:28.968068  102532 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa Username:docker}
	I1027 20:22:29.054515  102532 ssh_runner.go:195] Run: systemctl --version
	I1027 20:22:29.085029  102532 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:22:29.256052  102532 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:22:29.266621  102532 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:22:29.266715  102532 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:22:29.297743  102532 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 20:22:29.297772  102532 start.go:495] detecting cgroup driver to use...
	I1027 20:22:29.297847  102532 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:22:29.325065  102532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:22:29.346924  102532 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:22:29.346998  102532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:22:29.369883  102532 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:22:29.392856  102532 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:22:29.571241  102532 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:22:29.790297  102532 docker.go:234] disabling docker service ...
	I1027 20:22:29.790375  102532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:22:29.811717  102532 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:22:29.829911  102532 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:22:30.033157  102532 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:22:30.193153  102532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:22:30.212013  102532 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:22:30.238466  102532 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:22:30.238539  102532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.252077  102532 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:22:30.252150  102532 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.265071  102532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.281071  102532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.299339  102532 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:22:30.318664  102532 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.334833  102532 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.363150  102532 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:22:30.379616  102532 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:22:30.395165  102532 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1027 20:22:30.395231  102532 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1027 20:22:30.428853  102532 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:22:30.449660  102532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:22:30.625273  102532 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:22:30.772606  102532 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:22:30.772695  102532 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:22:30.779608  102532 start.go:563] Will wait 60s for crictl version
	I1027 20:22:30.779698  102532 ssh_runner.go:195] Run: which crictl
	I1027 20:22:30.784592  102532 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1027 20:22:30.836487  102532 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1027 20:22:30.836591  102532 ssh_runner.go:195] Run: crio --version
	I1027 20:22:30.871511  102532 ssh_runner.go:195] Run: crio --version
	I1027 20:22:30.912488  102532 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1027 20:22:30.917114  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:30.917654  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:30.917685  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:30.917873  102532 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1027 20:22:30.923147  102532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:22:30.942022  102532 kubeadm.go:883] updating cluster {Name:calico-764820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-764820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:22:30.942192  102532 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:22:30.942270  102532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:22:30.990488  102532 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 20:22:30.990577  102532 ssh_runner.go:195] Run: which lz4
	I1027 20:22:30.996784  102532 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1027 20:22:31.004312  102532 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1027 20:22:31.004346  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1027 20:22:32.870689  102532 crio.go:462] duration metric: took 1.873940567s to copy over tarball
	I1027 20:22:32.870798  102532 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1027 20:22:34.891240  102532 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.020403837s)
	I1027 20:22:34.891278  102532 crio.go:469] duration metric: took 2.02054321s to extract the tarball
	I1027 20:22:34.891288  102532 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1027 20:22:34.959837  102532 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:22:35.014907  102532 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:22:35.014943  102532 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:22:35.014956  102532 kubeadm.go:934] updating node { 192.168.61.159 8443 v1.34.1 crio true true} ...
	I1027 20:22:35.015129  102532 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-764820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-764820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1027 20:22:35.015241  102532 ssh_runner.go:195] Run: crio config
	I1027 20:22:35.070248  102532 cni.go:84] Creating CNI manager for "calico"
	I1027 20:22:35.070281  102532 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:22:35.070306  102532 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.159 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-764820 NodeName:calico-764820 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:22:35.070465  102532 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-764820"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.159"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.159"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
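	
	(The generated config above is later copied into the guest as /var/tmp/minikube/kubeadm.yaml.new, per the scp line below, and then used to drive kubeadm; a hedged sketch of the equivalent manual step, where the binary path is an assumption based on the kubelet path in the unit file above, not a verbatim minikube invocation:

	    # assumed path and minimal flags; the harness may pass additional options
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	)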
	
	I1027 20:22:35.070560  102532 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:22:35.088190  102532 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:22:35.088274  102532 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:22:35.102996  102532 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1027 20:22:35.132248  102532 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:22:35.160706  102532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1027 20:22:35.192390  102532 ssh_runner.go:195] Run: grep 192.168.61.159	control-plane.minikube.internal$ /etc/hosts
	I1027 20:22:35.197561  102532 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:22:35.216396  102532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:22:35.377241  102532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:22:35.416307  102532 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820 for IP: 192.168.61.159
	I1027 20:22:35.416338  102532 certs.go:195] generating shared ca certs ...
	I1027 20:22:35.416362  102532 certs.go:227] acquiring lock for ca certs: {Name:mk3c1c890b4611f9f1a3f97b9046837227a16799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:35.416537  102532 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key
	I1027 20:22:35.416616  102532 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key
	I1027 20:22:35.416636  102532 certs.go:257] generating profile certs ...
	I1027 20:22:35.416725  102532 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/client.key
	I1027 20:22:35.416745  102532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/client.crt with IP's: []
	I1027 20:22:35.481967  102532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/client.crt ...
	I1027 20:22:35.482000  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/client.crt: {Name:mk839224c383aa185b4d38988ed169f1e2a7eb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:35.482232  102532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/client.key ...
	I1027 20:22:35.482255  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/client.key: {Name:mk976b1cf3574ceb66c2210198a48df823bc3dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:35.483112  102532 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.key.8f26ec61
	I1027 20:22:35.483148  102532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.crt.8f26ec61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.159]
	I1027 20:22:35.717978  102532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.crt.8f26ec61 ...
	I1027 20:22:35.718022  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.crt.8f26ec61: {Name:mkb8d84a8cca770d0ce13a53d6d7b801526743e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:35.718256  102532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.key.8f26ec61 ...
	I1027 20:22:35.718273  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.key.8f26ec61: {Name:mk17073ac3c295b5e587bb9342ed29accc7a3468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:35.718357  102532 certs.go:382] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.crt.8f26ec61 -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.crt
	I1027 20:22:35.718428  102532 certs.go:386] copying /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.key.8f26ec61 -> /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.key
	I1027 20:22:35.718480  102532 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.key
	I1027 20:22:35.718495  102532 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.crt with IP's: []
	I1027 20:22:36.033290  102532 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.crt ...
	I1027 20:22:36.033327  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.crt: {Name:mk11902ac55346ce7b55a5c0a0430aa05de58b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:36.033532  102532 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.key ...
	I1027 20:22:36.033549  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.key: {Name:mk3a3a9b53f62d5dafba58d0e71b06325f3698a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
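
	The profile certificates above (client, apiserver, aggregator proxy-client) are minted by minikube's certs/crypto helpers against the shared minikubeCA. As a rough sketch of the underlying mechanics, not minikube's actual code, a CA-signed client certificate can be produced with Go's crypto/x509 along these lines (file paths, subject fields, and the PKCS#1 key format are assumptions):

    // Hypothetical sketch: issue a CA-signed client certificate, roughly what
    // the certs/crypto helpers do for the "minikube-user" profile cert.
    // Paths and subject fields below are assumptions for illustration only.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Load the existing CA certificate and key (PEM; PKCS#1 RSA key assumed).
        caCertPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("ca.key")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Fresh key pair for the client certificate.
        clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        // Client-auth certificate template ("minikube-user" in system:masters).
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject: pkix.Name{
                CommonName:   "minikube-user",
                Organization: []string{"system:masters"},
            },
            NotBefore:   time.Now().Add(-time.Hour),
            NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }

        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        _ = os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)}), 0600)
    }

	The apiserver certificate in the log follows the same pattern, with the service and node IPs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.159) added as SANs and ServerAuth in the extended key usage.
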
	I1027 20:22:36.033756  102532 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem (1338 bytes)
	W1027 20:22:36.033807  102532 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705_empty.pem, impossibly tiny 0 bytes
	I1027 20:22:36.033823  102532 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca-key.pem (1679 bytes)
	I1027 20:22:36.033857  102532 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:22:36.033904  102532 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:22:36.033941  102532 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/certs/key.pem (1675 bytes)
	I1027 20:22:36.033996  102532 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem (1708 bytes)
	I1027 20:22:36.034599  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:22:36.078103  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:22:36.121109  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:22:36.154487  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 20:22:36.191204  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 20:22:36.229480  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:22:36.270280  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:22:36.311734  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/calico-764820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:22:36.407074  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:22:36.441784  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/certs/62705.pem --> /usr/share/ca-certificates/62705.pem (1338 bytes)
	I1027 20:22:36.476128  102532 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/ssl/certs/627052.pem --> /usr/share/ca-certificates/627052.pem (1708 bytes)
	I1027 20:22:36.513371  102532 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:22:36.536343  102532 ssh_runner.go:195] Run: openssl version
	I1027 20:22:36.543692  102532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/62705.pem && ln -fs /usr/share/ca-certificates/62705.pem /etc/ssl/certs/62705.pem"
	I1027 20:22:36.558671  102532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/62705.pem
	I1027 20:22:36.564949  102532 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:09 /usr/share/ca-certificates/62705.pem
	I1027 20:22:36.565012  102532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/62705.pem
	I1027 20:22:36.573652  102532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/62705.pem /etc/ssl/certs/51391683.0"
	I1027 20:22:36.589242  102532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/627052.pem && ln -fs /usr/share/ca-certificates/627052.pem /etc/ssl/certs/627052.pem"
	I1027 20:22:36.606793  102532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/627052.pem
	I1027 20:22:36.613649  102532 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:09 /usr/share/ca-certificates/627052.pem
	I1027 20:22:36.613721  102532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/627052.pem
	I1027 20:22:36.622485  102532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/627052.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:22:36.637898  102532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:22:36.652376  102532 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:22:36.658942  102532 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:22:36.659020  102532 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:22:36.670533  102532 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
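
	The /etc/ssl/certs/<hash>.0 symlinks created above use the subject hash printed by `openssl x509 -hash -noout`, which is how OpenSSL locates CAs in a hashed certificate directory. A small sketch of that link step, shelling out to openssl just as the log does (paths assumed):

    // Hypothetical sketch of the hash-symlink step seen in the log: compute the
    // OpenSSL subject hash of a CA PEM and link /etc/ssl/certs/<hash>.0 to it.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // assumed path

        // Same command the log runs: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, mirroring `ln -fs`.
        _ = os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }
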
	I1027 20:22:36.685544  102532 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:22:36.691101  102532 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:22:36.691173  102532 kubeadm.go:400] StartCluster: {Name:calico-764820 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-764820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:22:36.691312  102532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:22:36.691384  102532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:22:36.733849  102532 cri.go:89] found id: ""
	I1027 20:22:36.733920  102532 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:22:36.748107  102532 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:22:36.762278  102532 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:22:36.778885  102532 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:22:36.778912  102532 kubeadm.go:157] found existing configuration files:
	
	I1027 20:22:36.778967  102532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 20:22:36.795080  102532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:22:36.795167  102532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:22:36.811696  102532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 20:22:36.824325  102532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:22:36.824396  102532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:22:36.839118  102532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:22:36.852326  102532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:22:36.852408  102532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:22:36.866979  102532 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:22:36.880354  102532 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:22:36.880427  102532 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 20:22:36.893775  102532 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1027 20:22:36.959259  102532 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:22:36.959351  102532 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:22:37.092019  102532 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:22:37.092218  102532 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:22:37.092349  102532 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:22:37.110699  102532 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:22:37.158097  102532 out.go:252]   - Generating certificates and keys ...
	I1027 20:22:37.158216  102532 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:22:37.158312  102532 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 20:22:37.424830  102532 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:22:37.523971  102532 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:22:37.763831  102532 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:22:37.874349  102532 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:22:38.279114  102532 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:22:38.279285  102532 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-764820 localhost] and IPs [192.168.61.159 127.0.0.1 ::1]
	I1027 20:22:38.464968  102532 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:22:38.465166  102532 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-764820 localhost] and IPs [192.168.61.159 127.0.0.1 ::1]
	I1027 20:22:38.889474  102532 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:22:38.959933  102532 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:22:39.163734  102532 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:22:39.163860  102532 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:22:39.714912  102532 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:22:40.231695  102532 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:22:40.380138  102532 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:22:41.060378  102532 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:22:41.304102  102532 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:22:41.304942  102532 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:22:41.308556  102532 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 20:22:41.310548  102532 out.go:252]   - Booting up control plane ...
	I1027 20:22:41.310701  102532 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:22:41.310814  102532 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:22:41.311711  102532 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:22:41.353545  102532 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:22:41.354342  102532 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:22:41.365832  102532 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:22:41.366446  102532 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:22:41.366550  102532 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 20:22:41.567935  102532 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:22:41.568138  102532 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 20:22:42.567347  102532 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001627888s
	I1027 20:22:42.571009  102532 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:22:42.571182  102532 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.61.159:8443/livez
	I1027 20:22:42.571313  102532 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:22:42.571941  102532 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 20:22:45.289723  102532 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.719862228s
	I1027 20:22:46.874482  102532 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.305648132s
	I1027 20:22:49.070687  102532 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502590166s
	I1027 20:22:49.084606  102532 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:22:49.104620  102532 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:22:49.122415  102532 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:22:49.122609  102532 kubeadm.go:318] [mark-control-plane] Marking the node calico-764820 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:22:49.138199  102532 kubeadm.go:318] [bootstrap-token] Using token: jojv30.uouc75tug86x4d7b
	I1027 20:22:49.139635  102532 out.go:252]   - Configuring RBAC rules ...
	I1027 20:22:49.139800  102532 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:22:49.150615  102532 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:22:49.165666  102532 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:22:49.172556  102532 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:22:49.176531  102532 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:22:49.180319  102532 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:22:49.478904  102532 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:22:49.953003  102532 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:22:50.478608  102532 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:22:50.480522  102532 kubeadm.go:318] 
	I1027 20:22:50.480592  102532 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:22:50.480605  102532 kubeadm.go:318] 
	I1027 20:22:50.480687  102532 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:22:50.480698  102532 kubeadm.go:318] 
	I1027 20:22:50.480745  102532 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:22:50.480849  102532 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:22:50.480929  102532 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:22:50.480946  102532 kubeadm.go:318] 
	I1027 20:22:50.481083  102532 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:22:50.481095  102532 kubeadm.go:318] 
	I1027 20:22:50.481163  102532 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:22:50.481173  102532 kubeadm.go:318] 
	I1027 20:22:50.481275  102532 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:22:50.481405  102532 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:22:50.481502  102532 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:22:50.481512  102532 kubeadm.go:318] 
	I1027 20:22:50.481638  102532 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:22:50.481745  102532 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:22:50.481755  102532 kubeadm.go:318] 
	I1027 20:22:50.481868  102532 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token jojv30.uouc75tug86x4d7b \
	I1027 20:22:50.482109  102532 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a \
	I1027 20:22:50.482161  102532 kubeadm.go:318] 	--control-plane 
	I1027 20:22:50.482232  102532 kubeadm.go:318] 
	I1027 20:22:50.482394  102532 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:22:50.482406  102532 kubeadm.go:318] 
	I1027 20:22:50.482519  102532 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token jojv30.uouc75tug86x4d7b \
	I1027 20:22:50.482655  102532 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab9d04ec7d88165f854ca6007f0db50cb21d439f87063d47c1cf645e122a460a 
	I1027 20:22:50.483809  102532 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
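
	The --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. If it ever needs to be recomputed from ca.crt, for example to hand a join command to another node, a minimal sketch in Go (the ca.crt path is an assumption):

    // Sketch: recompute kubeadm's --discovery-token-ca-cert-hash from the cluster
    // CA certificate. The hash is sha256 over the CA cert's Subject Public Key Info.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

	The equivalent openssl pipeline is `openssl x509 -pubkey -in ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256`.
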
	I1027 20:22:50.483879  102532 cni.go:84] Creating CNI manager for "calico"
	I1027 20:22:50.485589  102532 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1027 20:22:50.487701  102532 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:22:50.487729  102532 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1027 20:22:50.527150  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 20:22:52.594466  102532 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.067269361s)
	I1027 20:22:52.594531  102532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:22:52.594637  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:22:52.594657  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-764820 minikube.k8s.io/updated_at=2025_10_27T20_22_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=calico-764820 minikube.k8s.io/primary=true
	I1027 20:22:52.636057  102532 ops.go:34] apiserver oom_adj: -16
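
	The oom_adj line above confirms the apiserver's OOM-score tuning took effect (-16 here). A sketch of that same check outside minikube, finding the newest kube-apiserver process with pgrep and reading its /proc entry:

    // Sketch of the oom_adj check seen in the log: locate the kube-apiserver PID
    // and print its /proc/<pid>/oom_adj value.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            log.Fatal("kube-apiserver not running: ", err)
        }
        pid := strings.TrimSpace(string(out))

        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
    }
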
	I1027 20:22:52.726460  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:22:53.227137  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:22:53.727524  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:22:54.227355  102532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:22:54.407226  102532 kubeadm.go:1113] duration metric: took 1.812673411s to wait for elevateKubeSystemPrivileges
	I1027 20:22:54.407254  102532 kubeadm.go:402] duration metric: took 17.71609082s to StartCluster
	I1027 20:22:54.407272  102532 settings.go:142] acquiring lock: {Name:mk19a39086427cb47b9bb78fd0b5176c91a751d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:54.407340  102532 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:22:54.408990  102532 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-58821/kubeconfig: {Name:mk90c4d883178b7191d62a8cd99434bc24dd555f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:22:54.409270  102532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:22:54.409274  102532 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:22:54.409304  102532 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:22:54.409380  102532 addons.go:69] Setting storage-provisioner=true in profile "calico-764820"
	I1027 20:22:54.409483  102532 config.go:182] Loaded profile config "calico-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:22:54.409396  102532 addons.go:238] Setting addon storage-provisioner=true in "calico-764820"
	I1027 20:22:54.409555  102532 host.go:66] Checking if "calico-764820" exists ...
	I1027 20:22:54.409403  102532 addons.go:69] Setting default-storageclass=true in profile "calico-764820"
	I1027 20:22:54.409650  102532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-764820"
	I1027 20:22:54.413027  102532 out.go:179] * Verifying Kubernetes components...
	I1027 20:22:54.414479  102532 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:22:54.414543  102532 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:22:54.414880  102532 addons.go:238] Setting addon default-storageclass=true in "calico-764820"
	I1027 20:22:54.414934  102532 host.go:66] Checking if "calico-764820" exists ...
	I1027 20:22:54.415731  102532 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:22:54.415862  102532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:22:54.417476  102532 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:22:54.417493  102532 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:22:54.420728  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:54.421281  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:54.421313  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:54.421666  102532 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa Username:docker}
	I1027 20:22:54.421991  102532 main.go:141] libmachine: domain calico-764820 has defined MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:54.422522  102532 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4a:64:3d", ip: ""} in network mk-calico-764820: {Iface:virbr3 ExpiryTime:2025-10-27 21:22:25 +0000 UTC Type:0 Mac:52:54:00:4a:64:3d Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:calico-764820 Clientid:01:52:54:00:4a:64:3d}
	I1027 20:22:54.422552  102532 main.go:141] libmachine: domain calico-764820 has defined IP address 192.168.61.159 and MAC address 52:54:00:4a:64:3d in network mk-calico-764820
	I1027 20:22:54.422945  102532 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/calico-764820/id_rsa Username:docker}
	I1027 20:22:54.785339  102532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
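
	The bash pipeline above rewrites the CoreDNS Corefile stored in the coredns ConfigMap, inserting a hosts{} block that maps host.minikube.internal to the host gateway ahead of the forward plugin, then feeds the result to kubectl replace. A hypothetical sketch of just the text transformation (the sample Corefile and indentation are assumptions; fetching and replacing the ConfigMap is omitted):

    // Hypothetical sketch of the Corefile edit performed by the sed pipeline:
    // insert a hosts{} record before the "forward . /etc/resolv.conf" line.
    package main

    import (
        "fmt"
        "strings"
    )

    func injectHostRecord(corefile, ip string) string {
        hostsBlock := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }", ip)
        lines := strings.Split(corefile, "\n")
        out := make([]string, 0, len(lines)+4)
        for _, line := range lines {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out, hostsBlock) // insert just before the forward plugin
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        // Minimal example Corefile, not the cluster's actual one.
        corefile := ".:53 {\n    errors\n    health\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}"
        fmt.Println(injectHostRecord(corefile, "192.168.61.1"))
    }
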
	I1027 20:22:54.895638  102532 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:22:55.163619  102532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:22:55.190170  102532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:22:55.715477  102532 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1027 20:22:55.716784  102532 node_ready.go:35] waiting up to 15m0s for node "calico-764820" to be "Ready" ...
	I1027 20:22:56.166063  102532 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 20:22:56.168064  102532 addons.go:514] duration metric: took 1.758733228s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 20:22:56.220532  102532 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-764820" context rescaled to 1 replicas
	W1027 20:22:57.725059  102532 node_ready.go:57] node "calico-764820" has "Ready":"False" status (will retry)
	W1027 20:23:00.244912  102532 node_ready.go:57] node "calico-764820" has "Ready":"False" status (will retry)
	W1027 20:23:02.721824  102532 node_ready.go:57] node "calico-764820" has "Ready":"False" status (will retry)
	I1027 20:23:03.721822  102532 node_ready.go:49] node "calico-764820" is "Ready"
	I1027 20:23:03.721870  102532 node_ready.go:38] duration metric: took 8.005038814s for node "calico-764820" to be "Ready" ...
	I1027 20:23:03.721889  102532 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:23:03.721944  102532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:23:03.772657  102532 api_server.go:72] duration metric: took 9.363264666s to wait for apiserver process to appear ...
	I1027 20:23:03.772691  102532 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:23:03.772716  102532 api_server.go:253] Checking apiserver healthz at https://192.168.61.159:8443/healthz ...
	I1027 20:23:03.779801  102532 api_server.go:279] https://192.168.61.159:8443/healthz returned 200:
	ok
	I1027 20:23:03.781013  102532 api_server.go:141] control plane version: v1.34.1
	I1027 20:23:03.781062  102532 api_server.go:131] duration metric: took 8.361888ms to wait for apiserver health ...
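
	The healthz wait above simply polls the apiserver's /healthz endpoint until it answers 200 "ok" or a deadline passes. A self-contained sketch of that loop (endpoint hard-coded and certificate verification skipped purely for illustration; a real client would trust the cluster CA instead):

    // Sketch: poll the apiserver healthz endpoint until it reports healthy.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.159:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("apiserver never became healthy")
    }
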
	I1027 20:23:03.781075  102532 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:23:03.786657  102532 system_pods.go:59] 9 kube-system pods found
	I1027 20:23:03.786698  102532 system_pods.go:61] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:03.786714  102532 system_pods.go:61] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:03.786725  102532 system_pods.go:61] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:03.786736  102532 system_pods.go:61] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:03.786749  102532 system_pods.go:61] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:03.786760  102532 system_pods.go:61] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:03.786768  102532 system_pods.go:61] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:03.786773  102532 system_pods.go:61] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:03.786785  102532 system_pods.go:61] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:23:03.786793  102532 system_pods.go:74] duration metric: took 5.709699ms to wait for pod list to return data ...
	I1027 20:23:03.786812  102532 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:23:03.790259  102532 default_sa.go:45] found service account: "default"
	I1027 20:23:03.790279  102532 default_sa.go:55] duration metric: took 3.45972ms for default service account to be created ...
	I1027 20:23:03.790288  102532 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:23:03.795336  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:03.795396  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:03.795410  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:03.795426  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:03.795433  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:03.795444  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:03.795450  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:03.795458  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:03.795464  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:03.795471  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:23:03.795511  102532 retry.go:31] will retry after 240.390358ms: missing components: kube-dns
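
	Each retry.go line that follows repeats the same pattern: list the kube-system pods, check whether the required components (here kube-dns) are running, and if not, sleep a growing, jittered delay before trying again until the overall budget expires. A minimal stand-alone sketch of that loop (the growth factor and the stand-in check are assumptions, not minikube's retry implementation):

    // Minimal sketch of a poll-with-growing-backoff loop like the one in the log.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func pollUntil(check func() error, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        delay := 200 * time.Millisecond
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            if err := check(); err == nil {
                return nil
            } else {
                fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, delay)
            }
            time.Sleep(delay)
            // Grow the delay with a little jitter, roughly matching the log's cadence.
            delay = time.Duration(float64(delay) * (1.3 + rand.Float64()*0.4))
        }
        return errors.New("readiness check never passed")
    }

    func main() {
        start := time.Now()
        err := pollUntil(func() error {
            if time.Since(start) > 3*time.Second { // stand-in for "kube-dns is running"
                return nil
            }
            return errors.New("missing components: kube-dns")
        }, 30*time.Second)
        fmt.Println("result:", err)
    }
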
	I1027 20:23:04.041418  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:04.041454  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:04.041463  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:04.041471  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:04.041476  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:04.041481  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:04.041484  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:04.041488  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:04.041491  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:04.041496  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:23:04.041513  102532 retry.go:31] will retry after 375.232769ms: missing components: kube-dns
	I1027 20:23:04.422358  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:04.422392  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:04.422402  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:04.422410  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:04.422414  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:04.422419  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:04.422422  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:04.422429  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:04.422433  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:04.422436  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:04.422451  102532 retry.go:31] will retry after 365.796533ms: missing components: kube-dns
	I1027 20:23:04.794520  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:04.794551  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:04.794559  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:04.794566  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:04.794570  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:04.794575  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:04.794578  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:04.794583  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:04.794586  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:04.794589  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:04.794605  102532 retry.go:31] will retry after 400.587706ms: missing components: kube-dns
	I1027 20:23:05.200057  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:05.200089  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:05.200097  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:05.200109  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:05.200113  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:05.200118  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:05.200121  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:05.200129  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:05.200132  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:05.200136  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:05.200156  102532 retry.go:31] will retry after 686.433839ms: missing components: kube-dns
	I1027 20:23:05.891997  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:05.892053  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:05.892062  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:05.892070  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:05.892074  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:05.892079  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:05.892082  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:05.892089  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:05.892092  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:05.892095  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:05.892113  102532 retry.go:31] will retry after 942.209046ms: missing components: kube-dns
	I1027 20:23:06.839649  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:06.839686  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:06.839699  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:06.839707  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:06.839711  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:06.839716  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:06.839720  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:06.839724  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:06.839728  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:06.839730  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:06.839747  102532 retry.go:31] will retry after 911.31558ms: missing components: kube-dns
	I1027 20:23:07.756594  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:07.756632  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:07.756643  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:07.756653  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:07.756658  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:07.756667  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:07.756671  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:07.756677  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:07.756683  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:07.756688  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:07.756711  102532 retry.go:31] will retry after 1.255745109s: missing components: kube-dns
	I1027 20:23:09.019371  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:09.019405  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:09.019414  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:09.019427  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:09.019431  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:09.019436  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:09.019440  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:09.019443  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:09.019451  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:09.019454  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:09.019472  102532 retry.go:31] will retry after 1.389125247s: missing components: kube-dns
	I1027 20:23:10.416671  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:10.416720  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:10.416746  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:10.416757  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:10.416775  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:10.416788  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:10.416794  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:10.416799  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:10.416804  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:10.416810  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:10.416846  102532 retry.go:31] will retry after 2.015463531s: missing components: kube-dns
	I1027 20:23:12.437178  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:12.437214  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:12.437231  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:12.437237  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:12.437241  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:12.437247  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:12.437252  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:12.437258  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:12.437263  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:12.437268  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:12.437290  102532 retry.go:31] will retry after 2.702603569s: missing components: kube-dns
	I1027 20:23:15.146508  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:15.146546  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:15.146556  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:15.146562  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:15.146566  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:15.146571  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:15.146574  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:15.146610  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:15.146614  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:15.146618  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:15.146634  102532 retry.go:31] will retry after 3.457222744s: missing components: kube-dns
	I1027 20:23:18.613281  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:18.613328  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:18.613344  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:18.613353  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:18.613359  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:18.613368  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:18.613374  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:18.613381  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:18.613386  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:18.613391  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:18.613420  102532 retry.go:31] will retry after 4.456735121s: missing components: kube-dns
	I1027 20:23:23.078027  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:23.078090  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:23.078103  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:23.078114  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:23.078121  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:23.078129  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:23.078148  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:23.078155  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:23.078160  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:23.078166  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:23.078187  102532 retry.go:31] will retry after 3.796007846s: missing components: kube-dns
	I1027 20:23:26.880604  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:26.880647  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:26.880660  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:26.880670  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:26.880676  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:26.880685  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:26.880690  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:26.880696  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:26.880700  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:26.880703  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:26.880719  102532 retry.go:31] will retry after 6.376682373s: missing components: kube-dns
	I1027 20:23:33.267787  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:33.267827  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:33.267840  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:33.267851  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:33.267857  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:33.267865  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:33.267871  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:33.267876  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:33.267881  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:33.267888  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:33.267911  102532 retry.go:31] will retry after 8.421480402s: missing components: kube-dns
	I1027 20:23:41.695279  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:41.695320  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:41.695333  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:41.695344  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:41.695350  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:41.695357  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:41.695364  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:41.695370  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:41.695376  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:41.695381  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:41.695401  102532 retry.go:31] will retry after 10.752062269s: missing components: kube-dns
	I1027 20:23:52.456538  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:23:52.456583  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:23:52.456598  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:23:52.456608  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:23:52.456614  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:23:52.456620  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:23:52.456625  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:23:52.456630  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:23:52.456635  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:23:52.456639  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:23:52.456659  102532 retry.go:31] will retry after 8.609092454s: missing components: kube-dns
	I1027 20:24:01.077654  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:24:01.077695  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:24:01.077712  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:24:01.077723  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:24:01.077731  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:24:01.077740  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:24:01.077746  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:24:01.077752  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:24:01.077759  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:24:01.077764  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:24:01.077785  102532 retry.go:31] will retry after 11.876664944s: missing components: kube-dns
	I1027 20:24:12.961929  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:24:12.961960  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:24:12.961971  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:24:12.961981  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:24:12.961985  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:24:12.961989  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:24:12.961992  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:24:12.961996  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:24:12.962000  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:24:12.962006  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:24:12.962025  102532 retry.go:31] will retry after 19.041730097s: missing components: kube-dns
	I1027 20:24:32.009294  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:24:32.009335  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:24:32.009343  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:24:32.009362  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:24:32.009366  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:24:32.009371  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:24:32.009374  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:24:32.009377  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:24:32.009380  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:24:32.009383  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:24:32.009401  102532 retry.go:31] will retry after 23.285222583s: missing components: kube-dns
	I1027 20:24:55.300147  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:24:55.300185  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:24:55.300194  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:24:55.300203  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:24:55.300207  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:24:55.300214  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:24:55.300217  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:24:55.300220  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:24:55.300223  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:24:55.300227  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:24:55.300244  102532 retry.go:31] will retry after 28.023470119s: missing components: kube-dns
	I1027 20:25:23.328434  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:25:23.328474  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:25:23.328485  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:25:23.328492  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:25:23.328496  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:25:23.328500  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:25:23.328506  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:25:23.328512  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:25:23.328517  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:25:23.328521  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:25:23.328542  102532 retry.go:31] will retry after 33.955809054s: missing components: kube-dns
	I1027 20:25:57.292204  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:25:57.292263  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:25:57.292275  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:25:57.292282  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:25:57.292285  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:25:57.292290  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:25:57.292293  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:25:57.292298  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:25:57.292302  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:25:57.292305  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:25:57.292328  102532 retry.go:31] will retry after 39.83190789s: missing components: kube-dns
	I1027 20:26:37.131541  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:26:37.131579  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:26:37.131592  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:26:37.131599  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:26:37.131604  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:26:37.131610  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:26:37.131615  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:26:37.131622  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:26:37.131626  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:26:37.131631  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:26:37.131657  102532 retry.go:31] will retry after 54.667102526s: missing components: kube-dns
	I1027 20:27:31.805331  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:27:31.805371  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:27:31.805381  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:27:31.805387  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:27:31.805391  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:27:31.805396  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:27:31.805400  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:27:31.805404  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:27:31.805407  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:27:31.805410  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:27:31.805433  102532 retry.go:31] will retry after 57.776487034s: missing components: kube-dns
	I1027 20:28:29.587913  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:28:29.587962  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:28:29.587973  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:28:29.587979  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:28:29.587984  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:28:29.587990  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:28:29.587995  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:28:29.588001  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:28:29.588006  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:28:29.588010  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:28:29.588047  102532 retry.go:31] will retry after 59.697150358s: missing components: kube-dns
	I1027 20:29:29.290804  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:29:29.290846  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:29:29.290858  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:29:29.290865  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:29:29.290869  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:29:29.290873  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:29:29.290876  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:29:29.290880  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:29:29.290884  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:29:29.290886  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:29:29.290907  102532 retry.go:31] will retry after 1m1.287844967s: missing components: kube-dns
	I1027 20:30:30.583184  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:30:30.583228  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:30:30.583240  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:30:30.583247  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:30:30.583251  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:30:30.583255  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:30:30.583258  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:30:30.583263  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:30:30.583266  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:30:30.583269  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:30:30.583287  102532 retry.go:31] will retry after 1m14.652154911s: missing components: kube-dns
	I1027 20:31:45.241658  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:31:45.241699  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:31:45.241710  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:31:45.241717  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:31:45.241722  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:31:45.241726  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:31:45.241729  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:31:45.241734  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:31:45.241737  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:31:45.241740  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:31:45.241757  102532 retry.go:31] will retry after 52.455401438s: missing components: kube-dns
	I1027 20:32:37.702188  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:32:37.702229  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:32:37.702237  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:32:37.702244  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:32:37.702248  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:32:37.702253  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:32:37.702256  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:32:37.702259  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:32:37.702263  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:32:37.702266  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:32:37.702285  102532 retry.go:31] will retry after 45.036953244s: missing components: kube-dns
	I1027 20:33:22.748465  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:33:22.748503  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:33:22.748516  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:33:22.748526  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:33:22.748530  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:33:22.748534  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:33:22.748537  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:33:22.748542  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:33:22.748545  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:33:22.748548  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:33:22.748563  102532 retry.go:31] will retry after 1m5.375625389s: missing components: kube-dns
	I1027 20:34:28.132298  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:34:28.132345  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:34:28.132360  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:34:28.132369  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:34:28.132375  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:34:28.132382  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:34:28.132388  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:34:28.132394  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:34:28.132399  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:34:28.132404  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:34:28.132429  102532 retry.go:31] will retry after 56.874817472s: missing components: kube-dns
	I1027 20:35:25.013366  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:35:25.013412  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:35:25.013421  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:35:25.013428  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:35:25.013431  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:35:25.013436  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:35:25.013439  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:35:25.013442  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:35:25.013446  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:35:25.013449  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:35:25.013466  102532 retry.go:31] will retry after 1m4.3203007s: missing components: kube-dns
	I1027 20:36:29.338361  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:36:29.338410  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:36:29.338427  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:36:29.338436  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:36:29.338442  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:36:29.338449  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:36:29.338454  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:36:29.338461  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:36:29.338467  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:36:29.338475  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:36:29.338498  102532 retry.go:31] will retry after 1m8.170319822s: missing components: kube-dns
	I1027 20:37:37.516526  102532 system_pods.go:86] 9 kube-system pods found
	I1027 20:37:37.516565  102532 system_pods.go:89] "calico-kube-controllers-59556d9b4c-5l66g" [edd93731-208f-498a-a685-7d5849495c40] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1027 20:37:37.516575  102532 system_pods.go:89] "calico-node-cqqwf" [f17d66c8-ce8c-4b30-8300-52a47ebcc655] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1027 20:37:37.516582  102532 system_pods.go:89] "coredns-66bc5c9577-b9k24" [b73362bb-1c6d-4384-bacc-8565280b9913] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:37:37.516588  102532 system_pods.go:89] "etcd-calico-764820" [cdc36de8-a383-4ab4-84e6-966b74c76ea5] Running
	I1027 20:37:37.516593  102532 system_pods.go:89] "kube-apiserver-calico-764820" [1cb677ec-5c92-459b-8c4b-b7216966cec6] Running
	I1027 20:37:37.516598  102532 system_pods.go:89] "kube-controller-manager-calico-764820" [c3543bcc-91b2-4e93-acac-9a0f055be028] Running
	I1027 20:37:37.516602  102532 system_pods.go:89] "kube-proxy-gcnql" [2da654a7-01ec-4e28-aeeb-49d86a3c4f39] Running
	I1027 20:37:37.516605  102532 system_pods.go:89] "kube-scheduler-calico-764820" [4efeef4f-bfb7-40b7-884c-ee40b6a972b4] Running
	I1027 20:37:37.516608  102532 system_pods.go:89] "storage-provisioner" [57175d62-72dd-4719-87d1-bf7c8c9d9ef7] Running
	I1027 20:37:37.519061  102532 out.go:203] 
	W1027 20:37:37.520845  102532 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W1027 20:37:37.520867  102532 out.go:285] * 
	* 
	W1027 20:37:37.522774  102532 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 20:37:37.524393  102532 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (943.57s)
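The retry loop in the log above never sees kube-dns become ready: coredns-66bc5c9577-b9k24 stays Pending because calico-node-cqqwf is stuck in its mount-bpffs init container, so the 15m0s apps_running wait expires. As a hedged diagnostic sketch only (not part of the recorded test run, and assuming the kubeconfig context calico-764820 created for this profile is still reachable), the stalled init container could be inspected directly:

	# Show events and container states for the stuck calico-node pod (pod name taken from the log above)
	kubectl --context calico-764820 -n kube-system describe pod calico-node-cqqwf
	# Read output from the mount-bpffs init container that never completed
	kubectl --context calico-764820 -n kube-system logs calico-node-cqqwf -c mount-bpffs

Both commands use only names that appear in the log; the context name is assumed to match the minikube profile name, as in the other tests in this report.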

                                                
                                    

Test pass (282/336)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.25
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.64
22 TestOffline 103.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 140.9
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.56
35 TestAddons/parallel/Registry 16.12
36 TestAddons/parallel/RegistryCreds 0.68
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 6.37
42 TestAddons/parallel/Headlamp 20.01
43 TestAddons/parallel/CloudSpanner 6.55
45 TestAddons/parallel/NvidiaDevicePlugin 6.9
46 TestAddons/parallel/Yakd 11.81
48 TestAddons/StoppedEnableDisable 90.11
49 TestCertOptions 79.44
50 TestCertExpiration 288.69
52 TestForceSystemdFlag 84.49
53 TestForceSystemdEnv 43.94
58 TestErrorSpam/setup 43.88
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.69
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 1.94
63 TestErrorSpam/stop 88.16
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 85.93
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 30.93
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.14
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 33.66
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.54
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 3.94
89 TestFunctional/parallel/ConfigCmd 0.45
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.78
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 0.99
104 TestFunctional/parallel/FileSync 0.16
105 TestFunctional/parallel/CertSync 0.97
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
118 TestFunctional/parallel/Version/short 0.06
119 TestFunctional/parallel/Version/components 0.45
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
124 TestFunctional/parallel/ImageCommands/ImageBuild 2.78
125 TestFunctional/parallel/ImageCommands/Setup 1.12
126 TestFunctional/parallel/ProfileCmd/profile_list 0.39
127 TestFunctional/parallel/MountCmd/any-port 37.88
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.01
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
145 TestFunctional/parallel/MountCmd/specific-port 1.13
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.2
148 TestFunctional/parallel/ServiceCmd/List 1.2
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.2
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 197.82
161 TestMultiControlPlane/serial/DeployApp 6.46
162 TestMultiControlPlane/serial/PingHostFromPods 1.32
163 TestMultiControlPlane/serial/AddWorkerNode 45.66
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
166 TestMultiControlPlane/serial/CopyFile 10.71
167 TestMultiControlPlane/serial/StopSecondaryNode 89.77
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
169 TestMultiControlPlane/serial/RestartSecondaryNode 51.32
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.76
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 393.36
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.12
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 245.21
175 TestMultiControlPlane/serial/RestartCluster 99.02
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 80.46
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
183 TestJSONOutput/start/Command 82.69
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.79
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.67
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 8.32
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 87.93
215 TestMountStart/serial/StartWithMountFirst 21.33
216 TestMountStart/serial/VerifyMountFirst 0.32
217 TestMountStart/serial/StartWithMountSecond 21.83
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.71
220 TestMountStart/serial/VerifyMountPostDelete 0.31
221 TestMountStart/serial/Stop 1.32
222 TestMountStart/serial/RestartStopped 17.85
223 TestMountStart/serial/VerifyMountPostStop 0.32
226 TestMultiNode/serial/FreshStart2Nodes 102.31
227 TestMultiNode/serial/DeployApp2Nodes 5.32
228 TestMultiNode/serial/PingHostFrom2Pods 0.87
229 TestMultiNode/serial/AddNode 43.74
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.47
232 TestMultiNode/serial/CopyFile 6.09
233 TestMultiNode/serial/StopNode 2.38
234 TestMultiNode/serial/StartAfterStop 44.2
235 TestMultiNode/serial/RestartKeepsNodes 302.93
236 TestMultiNode/serial/DeleteNode 2.75
237 TestMultiNode/serial/StopMultiNode 148.89
238 TestMultiNode/serial/RestartMultiNode 87.92
239 TestMultiNode/serial/ValidateNameConflict 45.02
246 TestScheduledStopUnix 112.63
250 TestRunningBinaryUpgrade 159.88
252 TestKubernetesUpgrade 165.71
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 85.29
257 TestNoKubernetes/serial/StartWithStopK8s 32.76
258 TestNoKubernetes/serial/Start 47.24
260 TestPause/serial/Start 82.21
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
262 TestNoKubernetes/serial/ProfileList 0.81
263 TestNoKubernetes/serial/Stop 1.38
264 TestNoKubernetes/serial/StartNoArgs 60.01
279 TestNetworkPlugins/group/false 4.97
283 TestISOImage/Setup 27.98
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
287 TestISOImage/Binaries/crictl 0.16
288 TestISOImage/Binaries/curl 0.17
289 TestISOImage/Binaries/docker 0.17
290 TestISOImage/Binaries/git 0.17
291 TestISOImage/Binaries/iptables 0.17
292 TestISOImage/Binaries/podman 0.17
293 TestISOImage/Binaries/rsync 0.17
294 TestISOImage/Binaries/socat 0.17
295 TestISOImage/Binaries/wget 0.16
296 TestISOImage/Binaries/VBoxControl 0.17
297 TestISOImage/Binaries/VBoxService 0.17
298 TestStoppedBinaryUpgrade/Setup 0.75
299 TestStoppedBinaryUpgrade/Upgrade 127.12
301 TestStartStop/group/old-k8s-version/serial/FirstStart 100.5
302 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
304 TestStartStop/group/no-preload/serial/FirstStart 106.35
306 TestStartStop/group/embed-certs/serial/FirstStart 100.06
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 121.31
309 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
311 TestStartStop/group/old-k8s-version/serial/Stop 85.74
312 TestStartStop/group/no-preload/serial/DeployApp 11.29
313 TestStartStop/group/embed-certs/serial/DeployApp 9.28
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
315 TestStartStop/group/no-preload/serial/Stop 88.96
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
317 TestStartStop/group/embed-certs/serial/Stop 88.72
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
319 TestStartStop/group/old-k8s-version/serial/SecondStart 44.64
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.22
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
325 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
326 TestStartStop/group/old-k8s-version/serial/Pause 2.5
328 TestStartStop/group/newest-cni/serial/FirstStart 44.35
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
330 TestStartStop/group/no-preload/serial/SecondStart 73.2
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
332 TestStartStop/group/embed-certs/serial/SecondStart 73.65
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 73.68
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.51
337 TestStartStop/group/newest-cni/serial/Stop 7.22
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
339 TestStartStop/group/newest-cni/serial/SecondStart 67.31
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/no-preload/serial/Pause 2.91
345 TestNetworkPlugins/group/auto/Start 88.14
346 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
347 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
348 TestStartStop/group/embed-certs/serial/Pause 4.17
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
350 TestNetworkPlugins/group/kindnet/Start 101.18
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.57
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
357 TestStartStop/group/newest-cni/serial/Pause 3.43
359 TestNetworkPlugins/group/custom-flannel/Start 105.63
360 TestNetworkPlugins/group/auto/KubeletFlags 0.23
361 TestNetworkPlugins/group/auto/NetCatPod 14.3
362 TestNetworkPlugins/group/auto/DNS 0.18
363 TestNetworkPlugins/group/auto/Localhost 0.16
364 TestNetworkPlugins/group/auto/HairPin 0.22
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/enable-default-cni/Start 87.07
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
368 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
369 TestNetworkPlugins/group/kindnet/DNS 0.19
370 TestNetworkPlugins/group/kindnet/Localhost 0.15
371 TestNetworkPlugins/group/kindnet/HairPin 0.17
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
374 TestNetworkPlugins/group/custom-flannel/DNS 0.2
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
377 TestNetworkPlugins/group/flannel/Start 71.48
378 TestNetworkPlugins/group/bridge/Start 95.03
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
386 TestNetworkPlugins/group/flannel/NetCatPod 11.26
388 TestISOImage/PersistentMounts//data 0.17
389 TestISOImage/PersistentMounts//var/lib/docker 0.17
390 TestISOImage/PersistentMounts//var/lib/cni 0.17
391 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
392 TestISOImage/PersistentMounts//var/lib/minikube 0.17
393 TestISOImage/PersistentMounts//var/lib/toolbox 0.16
394 TestISOImage/PersistentMounts//var/lib/boot2docker 0.16
395 TestNetworkPlugins/group/flannel/DNS 0.15
396 TestNetworkPlugins/group/flannel/Localhost 0.14
397 TestNetworkPlugins/group/flannel/HairPin 0.13
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
399 TestNetworkPlugins/group/bridge/NetCatPod 9.23
400 TestNetworkPlugins/group/bridge/DNS 0.15
401 TestNetworkPlugins/group/bridge/Localhost 0.12
402 TestNetworkPlugins/group/bridge/HairPin 0.11

TestDownloadOnly/v1.28.0/json-events (7.46s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-021762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-021762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.461264715s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.46s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 18:56:19.577808   62705 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1027 18:56:19.577902   62705 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-021762
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-021762: exit status 85 (73.776733ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-021762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-021762 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:12
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:12.170895   62717 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:12.170989   62717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:12.170994   62717 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:12.170998   62717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:12.171219   62717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	W1027 18:56:12.171344   62717 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21801-58821/.minikube/config/config.json: open /home/jenkins/minikube-integration/21801-58821/.minikube/config/config.json: no such file or directory
	I1027 18:56:12.171804   62717 out.go:368] Setting JSON to true
	I1027 18:56:12.172697   62717 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5922,"bootTime":1761585450,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:12.172798   62717 start.go:141] virtualization: kvm guest
	I1027 18:56:12.175051   62717 out.go:99] [download-only-021762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:12.175207   62717 notify.go:220] Checking for updates...
	W1027 18:56:12.175219   62717 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 18:56:12.176556   62717 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:12.177874   62717 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:12.179573   62717 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:56:12.180825   62717 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:12.181984   62717 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1027 18:56:12.184176   62717 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:12.184455   62717 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:12.218136   62717 out.go:99] Using the kvm2 driver based on user configuration
	I1027 18:56:12.218175   62717 start.go:305] selected driver: kvm2
	I1027 18:56:12.218182   62717 start.go:925] validating driver "kvm2" against <nil>
	I1027 18:56:12.218532   62717 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:12.219098   62717 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1027 18:56:12.219257   62717 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:12.219282   62717 cni.go:84] Creating CNI manager for ""
	I1027 18:56:12.219336   62717 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1027 18:56:12.219348   62717 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:12.219429   62717 start.go:349] cluster config:
	{Name:download-only-021762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-021762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:12.219601   62717 iso.go:125] acquiring lock: {Name:mkbd04910579486806c142a651be4f82498c73ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:56:12.221358   62717 out.go:99] Downloading VM boot image ...
	I1027 18:56:12.221405   62717 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21801-58821/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1027 18:56:15.721398   62717 out.go:99] Starting "download-only-021762" primary control-plane node in "download-only-021762" cluster
	I1027 18:56:15.721451   62717 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:15.741868   62717 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:15.741918   62717 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:15.742169   62717 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:15.743867   62717 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 18:56:15.743891   62717 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 18:56:15.766632   62717 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1027 18:56:15.766776   62717 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-021762 host does not exist
	  To start a cluster, run: "minikube start -p download-only-021762"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-021762
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.25s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-343850 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-343850 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.25377314s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.25s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 18:56:23.206404   62705 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1027 18:56:23.206439   62705 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-58821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-343850
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-343850: exit status 85 (73.637733ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-021762 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-021762 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-021762                                                                                                                                                 │ download-only-021762 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-343850 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-343850 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:20.004276   62894 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:20.004571   62894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:20.004582   62894 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:20.004587   62894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:20.004835   62894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 18:56:20.005398   62894 out.go:368] Setting JSON to true
	I1027 18:56:20.006250   62894 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5930,"bootTime":1761585450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:20.006340   62894 start.go:141] virtualization: kvm guest
	I1027 18:56:20.008255   62894 out.go:99] [download-only-343850] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:20.008441   62894 notify.go:220] Checking for updates...
	I1027 18:56:20.009776   62894 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:20.011348   62894 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:20.012849   62894 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 18:56:20.014169   62894 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 18:56:20.015417   62894 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-343850 host does not exist
	  To start a cluster, run: "minikube start -p download-only-343850"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-343850
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
I1027 18:56:23.868757   62705 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-001257 --alsologtostderr --binary-mirror http://127.0.0.1:33585 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-001257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-001257
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (103.26s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-382502 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-382502 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m42.345047172s)
helpers_test.go:175: Cleaning up "offline-crio-382502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-382502
--- PASS: TestOffline (103.26s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-864929
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-864929: exit status 85 (64.714902ms)

                                                
                                                
-- stdout --
	* Profile "addons-864929" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864929"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-864929
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-864929: exit status 85 (65.001179ms)

                                                
                                                
-- stdout --
	* Profile "addons-864929" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864929"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (140.9s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-864929 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-864929 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.894981458s)
--- PASS: TestAddons/Setup (140.90s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-864929 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-864929 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.56s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-864929 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-864929 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a668ad58-4082-4722-a352-3bd62c30df9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a668ad58-4082-4722-a352-3bd62c30df9b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.006462085s
addons_test.go:694: (dbg) Run:  kubectl --context addons-864929 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-864929 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-864929 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.56s)

                                                
                                    
TestAddons/parallel/Registry (16.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.907336ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-wrthd" [cfcc8422-d46c-42b9-a799-37210505af96] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006620198s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-6grgg" [80e2894b-b354-44d6-8c93-8c9a8f5ec644] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006480468s
addons_test.go:392: (dbg) Run:  kubectl --context addons-864929 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-864929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-864929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.32885796s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 ip
2025/10/27 18:59:20 [DEBUG] GET http://192.168.39.216:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.12s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.68s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.077277ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-864929
addons_test.go:332: (dbg) Run:  kubectl --context addons-864929 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.68s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-5bx7q" [ef4b0394-4dee-4b23-bee8-0787117f056f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004602398s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.565612ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7z96j" [332bcd8d-855b-409e-8a4c-c788da3ed019] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008400225s
addons_test.go:463: (dbg) Run:  kubectl --context addons-864929 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable metrics-server --alsologtostderr -v=1: (1.269404865s)
--- PASS: TestAddons/parallel/MetricsServer (6.37s)

                                                
                                    
TestAddons/parallel/Headlamp (20.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-864929 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-47vt5" [c0075a1f-48fe-40a0-b5bc-732b0f20bb62] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-47vt5" [c0075a1f-48fe-40a0-b5bc-732b0f20bb62] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-47vt5" [c0075a1f-48fe-40a0-b5bc-732b0f20bb62] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.006040244s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable headlamp --alsologtostderr -v=1: (6.019622196s)
--- PASS: TestAddons/parallel/Headlamp (20.01s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-5vxhg" [8ec00526-511c-4f82-a854-5b61f8cae321] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003946199s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.9s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-dq69s" [7048c489-be31-4c98-a8ea-455c9506a937] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.029071439s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.90s)

                                                
                                    
TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-g6rq6" [050fbdfe-54ad-4979-8871-a78a4f5ff542] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004850998s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-864929 addons disable yakd --alsologtostderr -v=1: (5.80384569s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (90.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-864929
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-864929: (1m29.904053131s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-864929
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-864929
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-864929
--- PASS: TestAddons/StoppedEnableDisable (90.11s)

                                                
                                    
TestCertOptions (79.44s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-523509 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-523509 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m17.549491238s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-523509 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-523509 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-523509 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-523509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-523509
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-523509: (1.481622787s)
--- PASS: TestCertOptions (79.44s)

                                                
                                    
TestCertExpiration (288.69s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-888375 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-888375 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (53.692397071s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-888375 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-888375 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (54.025214975s)
helpers_test.go:175: Cleaning up "cert-expiration-888375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-888375
--- PASS: TestCertExpiration (288.69s)

                                                
                                    
TestForceSystemdFlag (84.49s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-643469 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-643469 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.412173909s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-643469 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-643469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-643469
--- PASS: TestForceSystemdFlag (84.49s)

                                                
                                    
TestForceSystemdEnv (43.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-457836 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-457836 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.66740927s)
helpers_test.go:175: Cleaning up "force-systemd-env-457836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-457836
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-457836: (1.269331269s)
--- PASS: TestForceSystemdEnv (43.94s)

                                                
                                    
TestErrorSpam/setup (43.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-350567 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-350567 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-350567 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-350567 --driver=kvm2  --container-runtime=crio: (43.877101753s)
--- PASS: TestErrorSpam/setup (43.88s)

                                                
                                    
TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

                                                
                                    
TestErrorSpam/stop (88.16s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 stop
E1027 19:08:46.130368   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.136877   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.148388   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.169878   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.211424   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.292991   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.454631   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:46.776412   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:47.418543   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:48.700201   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:51.263195   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:08:56.384969   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:06.626721   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 stop: (1m25.12515841s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 stop
E1027 19:09:27.108097   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 stop: (1.827175775s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-350567 --log_dir /tmp/nospam-350567 stop: (1.207451539s)
--- PASS: TestErrorSpam/stop (88.16s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21801-58821/.minikube/files/etc/test/nested/copy/62705/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (85.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074768 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1027 19:10:08.070808   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-074768 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.932486903s)
--- PASS: TestFunctional/serial/StartWithProxy (85.93s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.93s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1027 19:10:55.817904   62705 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074768 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-074768 --alsologtostderr -v=8: (30.932036906s)
functional_test.go:678: soft start took 30.93293284s for "functional-074768" cluster.
I1027 19:11:26.750391   62705 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.93s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-074768 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 cache add registry.k8s.io/pause:3.1: (1.065416246s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 cache add registry.k8s.io/pause:3.3: (1.218153323s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cache add registry.k8s.io/pause:latest
E1027 19:11:29.992490   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 cache add registry.k8s.io/pause:latest: (1.155822186s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-074768 /tmp/TestFunctionalserialCacheCmdcacheadd_local1885696722/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cache add minikube-local-cache-test:functional-074768
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cache delete minikube-local-cache-test:functional-074768
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-074768
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.108807ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 cache reload: (1.020745352s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 kubectl -- --context functional-074768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-074768 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.66s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-074768 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.6567893s)
functional_test.go:776: restart took 33.656961071s for "functional-074768" cluster.
I1027 19:12:07.409382   62705 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.66s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-074768 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 logs: (1.54083362s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 logs --file /tmp/TestFunctionalserialLogsFileCmd2136358219/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 logs --file /tmp/TestFunctionalserialLogsFileCmd2136358219/001/logs.txt: (1.548260279s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-074768 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-074768
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-074768: exit status 115 (242.998498ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.117:30112 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-074768 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 config get cpus: exit status 14 (66.669542ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 config get cpus: exit status 14 (63.719049ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-074768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.027728ms)

                                                
                                                
-- stdout --
	* [functional-074768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:12:15.828189   68980 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:12:15.828319   68980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.828332   68980 out.go:374] Setting ErrFile to fd 2...
	I1027 19:12:15.828340   68980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.828604   68980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:12:15.829119   68980 out.go:368] Setting JSON to false
	I1027 19:12:15.830064   68980 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6886,"bootTime":1761585450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:12:15.830156   68980 start.go:141] virtualization: kvm guest
	I1027 19:12:15.833309   68980 out.go:179] * [functional-074768] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:12:15.834808   68980 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:12:15.834872   68980 notify.go:220] Checking for updates...
	I1027 19:12:15.838206   68980 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:12:15.839652   68980 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 19:12:15.840973   68980 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 19:12:15.842354   68980 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:12:15.844015   68980 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:12:15.846156   68980 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:12:15.846655   68980 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:12:15.888494   68980 out.go:179] * Using the kvm2 driver based on existing profile
	I1027 19:12:15.889833   68980 start.go:305] selected driver: kvm2
	I1027 19:12:15.889853   68980 start.go:925] validating driver "kvm2" against &{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:15.889988   68980 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:12:15.893437   68980 out.go:203] 
	W1027 19:12:15.894860   68980 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 19:12:15.896830   68980 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074768 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-074768 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.366631ms)

                                                
                                                
-- stdout --
	* [functional-074768] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:12:15.693129   68945 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:12:15.693274   68945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.693287   68945 out.go:374] Setting ErrFile to fd 2...
	I1027 19:12:15.693294   68945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:12:15.693757   68945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:12:15.694371   68945 out.go:368] Setting JSON to false
	I1027 19:12:15.695595   68945 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6886,"bootTime":1761585450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:12:15.695732   68945 start.go:141] virtualization: kvm guest
	I1027 19:12:15.698499   68945 out.go:179] * [functional-074768] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1027 19:12:15.700084   68945 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:12:15.700080   68945 notify.go:220] Checking for updates...
	I1027 19:12:15.702847   68945 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:12:15.704176   68945 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 19:12:15.705392   68945 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 19:12:15.706801   68945 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:12:15.708309   68945 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:12:15.710555   68945 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:12:15.711153   68945 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:12:15.750429   68945 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1027 19:12:15.752428   68945 start.go:305] selected driver: kvm2
	I1027 19:12:15.752446   68945 start.go:925] validating driver "kvm2" against &{Name:functional-074768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-074768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:12:15.752554   68945 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:12:15.755176   68945 out.go:203] 
	W1027 19:12:15.756539   68945 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 19:12:15.758341   68945 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh -n functional-074768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cp functional-074768:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3501193272/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh -n functional-074768 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh -n functional-074768 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.99s)

                                                
                                    
TestFunctional/parallel/FileSync (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/62705/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /etc/test/nested/copy/62705/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
TestFunctional/parallel/CertSync (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/62705.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /etc/ssl/certs/62705.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/62705.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /usr/share/ca-certificates/62705.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/627052.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /etc/ssl/certs/627052.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/627052.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /usr/share/ca-certificates/627052.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.97s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-074768 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh "sudo systemctl is-active docker": exit status 1 (195.08197ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh "sudo systemctl is-active containerd": exit status 1 (200.498014ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

                                                
                                    
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074768 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-074768
localhost/kicbase/echo-server:functional-074768
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074768 image ls --format short --alsologtostderr:
I1027 19:18:31.074074   71520 out.go:360] Setting OutFile to fd 1 ...
I1027 19:18:31.074321   71520 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:31.074329   71520 out.go:374] Setting ErrFile to fd 2...
I1027 19:18:31.074333   71520 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:31.074506   71520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
I1027 19:18:31.075245   71520 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:31.075378   71520 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:31.077532   71520 ssh_runner.go:195] Run: systemctl --version
I1027 19:18:31.079612   71520 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:31.080103   71520 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:18:31.080131   71520 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:31.080296   71520 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:18:31.159551   71520 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074768 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-074768  │ 0cb0c0fd72b8e │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/kicbase/echo-server           │ functional-074768  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-074768  │ c6d2dd4b4f786 │ 1.47MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074768 image ls --format table --alsologtostderr:
I1027 19:18:34.439438   71602 out.go:360] Setting OutFile to fd 1 ...
I1027 19:18:34.439671   71602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:34.439679   71602 out.go:374] Setting ErrFile to fd 2...
I1027 19:18:34.439683   71602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:34.439897   71602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
I1027 19:18:34.440480   71602 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:34.440577   71602 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:34.442639   71602 ssh_runner.go:195] Run: systemctl --version
I1027 19:18:34.444921   71602 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:34.445353   71602 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:18:34.445379   71602 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:34.445509   71602 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:18:34.523558   71602 ssh_runner.go:195] Run: sudo crictl images --output json
E1027 19:18:46.120886   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074768 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83
d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9a
c2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-074768"],"size":"4943877"},{"id":"c6d2dd4b4f78686779b9eb9e5b0b30bfe6a34da1789c9bdf4e073c1d6d79c6bb","repoDigests":["localhost/my-image@sha256:657ffe0ceef70ac5879abb57439e755edb74c9262fd9b3046491c2b48edbd69a"],"repoTags":["localhost/my-image:functional-074768"],"size":"1468600"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94
cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.3
4.1"],"size":"73138073"},{"id":"0cb0c0fd72b8e02dd1ea3d8e0b459ebfe615690781928063d0fbaef5b208f169","repoDigests":["localhost/minikube-local-cache-test@sha256:2cf913fe464e7a61f689f3a076d2fefc9a42a42365bd1c148d0a7673d8e4c7e7"],"repoTags":["localhost/minikube-local-cache-test:functional-074768"],"size":"3328"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"52b4c6218d1c56e5c7b050d55e1df0318ade980062056b82c1b0260d1daf7caf","repoDigests":["docker.io
/library/d06b2166ad2e36c7107577d585e48e0e43d6dfa333cb4986a2d5c95babf12687-tmp@sha256:9d1515acbaebcdb5c2b0ef7cffc29eec4c1584cb3be591b06f6822e9d5543e1e"],"repoTags":[],"size":"1466018"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074768 image ls --format json --alsologtostderr:
I1027 19:18:34.240965   71591 out.go:360] Setting OutFile to fd 1 ...
I1027 19:18:34.241236   71591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:34.241245   71591 out.go:374] Setting ErrFile to fd 2...
I1027 19:18:34.241249   71591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:34.241460   71591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
I1027 19:18:34.242077   71591 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:34.242181   71591 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:34.244528   71591 ssh_runner.go:195] Run: systemctl --version
I1027 19:18:34.247137   71591 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:34.247599   71591 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:18:34.247629   71591 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:34.247833   71591 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:18:34.332269   71591 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074768 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-074768
size: "4943877"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0cb0c0fd72b8e02dd1ea3d8e0b459ebfe615690781928063d0fbaef5b208f169
repoDigests:
- localhost/minikube-local-cache-test@sha256:2cf913fe464e7a61f689f3a076d2fefc9a42a42365bd1c148d0a7673d8e4c7e7
repoTags:
- localhost/minikube-local-cache-test:functional-074768
size: "3328"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074768 image ls --format yaml --alsologtostderr:
I1027 19:18:31.265268   71531 out.go:360] Setting OutFile to fd 1 ...
I1027 19:18:31.265505   71531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:31.265514   71531 out.go:374] Setting ErrFile to fd 2...
I1027 19:18:31.265518   71531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:31.265734   71531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
I1027 19:18:31.266320   71531 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:31.266412   71531 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:31.268426   71531 ssh_runner.go:195] Run: systemctl --version
I1027 19:18:31.270511   71531 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:31.271005   71531 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:18:31.271052   71531 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:31.271252   71531 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:18:31.355312   71531 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh pgrep buildkitd: exit status 1 (155.103545ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image build -t localhost/my-image:functional-074768 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 image build -t localhost/my-image:functional-074768 testdata/build --alsologtostderr: (2.420140045s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074768 image build -t localhost/my-image:functional-074768 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 52b4c6218d1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-074768
--> c6d2dd4b4f7
Successfully tagged localhost/my-image:functional-074768
c6d2dd4b4f78686779b9eb9e5b0b30bfe6a34da1789c9bdf4e073c1d6d79c6bb
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074768 image build -t localhost/my-image:functional-074768 testdata/build --alsologtostderr:
I1027 19:18:31.617991   71553 out.go:360] Setting OutFile to fd 1 ...
I1027 19:18:31.618273   71553 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:31.618285   71553 out.go:374] Setting ErrFile to fd 2...
I1027 19:18:31.618289   71553 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:18:31.618499   71553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
I1027 19:18:31.619146   71553 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:31.619890   71553 config.go:182] Loaded profile config "functional-074768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:18:31.622143   71553 ssh_runner.go:195] Run: systemctl --version
I1027 19:18:31.624140   71553 main.go:141] libmachine: domain functional-074768 has defined MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:31.624521   71553 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:c6:59", ip: ""} in network mk-functional-074768: {Iface:virbr1 ExpiryTime:2025-10-27 20:09:46 +0000 UTC Type:0 Mac:52:54:00:de:c6:59 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-074768 Clientid:01:52:54:00:de:c6:59}
I1027 19:18:31.624549   71553 main.go:141] libmachine: domain functional-074768 has defined IP address 192.168.39.117 and MAC address 52:54:00:de:c6:59 in network mk-functional-074768
I1027 19:18:31.624667   71553 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/functional-074768/id_rsa Username:docker}
I1027 19:18:31.703557   71553 build_images.go:161] Building image from path: /tmp/build.1496207409.tar
I1027 19:18:31.703658   71553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 19:18:31.717764   71553 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1496207409.tar
I1027 19:18:31.724124   71553 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1496207409.tar: stat -c "%s %y" /var/lib/minikube/build/build.1496207409.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1496207409.tar': No such file or directory
I1027 19:18:31.724157   71553 ssh_runner.go:362] scp /tmp/build.1496207409.tar --> /var/lib/minikube/build/build.1496207409.tar (3072 bytes)
I1027 19:18:31.758630   71553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1496207409
I1027 19:18:31.772407   71553 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1496207409 -xf /var/lib/minikube/build/build.1496207409.tar
I1027 19:18:31.784995   71553 crio.go:315] Building image: /var/lib/minikube/build/build.1496207409
I1027 19:18:31.785090   71553 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-074768 /var/lib/minikube/build/build.1496207409 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1027 19:18:33.945630   71553 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-074768 /var/lib/minikube/build/build.1496207409 --cgroup-manager=cgroupfs: (2.160508982s)
I1027 19:18:33.945714   71553 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1496207409
I1027 19:18:33.959720   71553 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1496207409.tar
I1027 19:18:33.973303   71553 build_images.go:217] Built localhost/my-image:functional-074768 from /tmp/build.1496207409.tar
I1027 19:18:33.973347   71553 build_images.go:133] succeeded building to: functional-074768
I1027 19:18:33.973352   71553 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.097478451s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-074768
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "309.233599ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "76.368037ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (37.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdany-port2926395070/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761592335039642548" to /tmp/TestFunctionalparallelMountCmdany-port2926395070/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761592335039642548" to /tmp/TestFunctionalparallelMountCmdany-port2926395070/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761592335039642548" to /tmp/TestFunctionalparallelMountCmdany-port2926395070/001/test-1761592335039642548
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.987982ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:12:15.213947   62705 retry.go:31] will retry after 285.163229ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 19:12 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 19:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 19:12 test-1761592335039642548
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh cat /mount-9p/test-1761592335039642548
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-074768 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c84388b6-2d7c-40a2-b560-fd225b55349a] Pending
helpers_test.go:352: "busybox-mount" [c84388b6-2d7c-40a2-b560-fd225b55349a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c84388b6-2d7c-40a2-b560-fd225b55349a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c84388b6-2d7c-40a2-b560-fd225b55349a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 36.004071866s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-074768 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdany-port2926395070/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (37.88s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "272.536515ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "70.259332ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image load --daemon kicbase/echo-server:functional-074768 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 image load --daemon kicbase/echo-server:functional-074768 --alsologtostderr: (1.745712206s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image load --daemon kicbase/echo-server:functional-074768 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-074768
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image load --daemon kicbase/echo-server:functional-074768 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image save kicbase/echo-server:functional-074768 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image rm kicbase/echo-server:functional-074768 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-074768
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 image save --daemon kicbase/echo-server:functional-074768 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-074768
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdspecific-port3318931296/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (152.68166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:12:53.071435   62705 retry.go:31] will retry after 305.459659ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdspecific-port3318931296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh "sudo umount -f /mount-9p": exit status 1 (157.972584ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-074768 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdspecific-port3318931296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.13s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.2s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T" /mount1: exit status 1 (167.833296ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:12:54.218008   62705 retry.go:31] will retry after 517.195363ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-074768 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074768 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2393103263/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 service list: (1.196878447s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-074768 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-074768 service list -o json: (1.195138046s)
functional_test.go:1504: Took "1.195214503s" to run "out/minikube-linux-amd64 -p functional-074768 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.20s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-074768
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-074768
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-074768
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (197.82s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1027 19:28:46.121567   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m17.240530118s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (197.82s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.46s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 kubectl -- rollout status deployment/busybox: (4.185341438s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-4qkq4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-77tsd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-7b6lm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-4qkq4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-77tsd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-7b6lm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-4qkq4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-77tsd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-7b6lm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.46s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-4qkq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-4qkq4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-77tsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-77tsd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-7b6lm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 kubectl -- exec busybox-7b57f96db7-7b6lm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 node add --alsologtostderr -v 5: (44.981583009s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.66s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-471101 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.71s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp testdata/cp-test.txt ha-471101:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032913466/001/cp-test_ha-471101.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101:/home/docker/cp-test.txt ha-471101-m02:/home/docker/cp-test_ha-471101_ha-471101-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test_ha-471101_ha-471101-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101:/home/docker/cp-test.txt ha-471101-m03:/home/docker/cp-test_ha-471101_ha-471101-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test_ha-471101_ha-471101-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101:/home/docker/cp-test.txt ha-471101-m04:/home/docker/cp-test_ha-471101_ha-471101-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test_ha-471101_ha-471101-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp testdata/cp-test.txt ha-471101-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032913466/001/cp-test_ha-471101-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m02:/home/docker/cp-test.txt ha-471101:/home/docker/cp-test_ha-471101-m02_ha-471101.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test_ha-471101-m02_ha-471101.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m02:/home/docker/cp-test.txt ha-471101-m03:/home/docker/cp-test_ha-471101-m02_ha-471101-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test_ha-471101-m02_ha-471101-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m02:/home/docker/cp-test.txt ha-471101-m04:/home/docker/cp-test_ha-471101-m02_ha-471101-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test_ha-471101-m02_ha-471101-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp testdata/cp-test.txt ha-471101-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032913466/001/cp-test_ha-471101-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m03:/home/docker/cp-test.txt ha-471101:/home/docker/cp-test_ha-471101-m03_ha-471101.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test_ha-471101-m03_ha-471101.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m03:/home/docker/cp-test.txt ha-471101-m02:/home/docker/cp-test_ha-471101-m03_ha-471101-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test_ha-471101-m03_ha-471101-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m03:/home/docker/cp-test.txt ha-471101-m04:/home/docker/cp-test_ha-471101-m03_ha-471101-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test_ha-471101-m03_ha-471101-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp testdata/cp-test.txt ha-471101-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3032913466/001/cp-test_ha-471101-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m04:/home/docker/cp-test.txt ha-471101:/home/docker/cp-test_ha-471101-m04_ha-471101.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101 "sudo cat /home/docker/cp-test_ha-471101-m04_ha-471101.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m04:/home/docker/cp-test.txt ha-471101-m02:/home/docker/cp-test_ha-471101-m04_ha-471101-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m02 "sudo cat /home/docker/cp-test_ha-471101-m04_ha-471101-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 cp ha-471101-m04:/home/docker/cp-test.txt ha-471101-m03:/home/docker/cp-test_ha-471101-m04_ha-471101-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 ssh -n ha-471101-m03 "sudo cat /home/docker/cp-test_ha-471101-m04_ha-471101-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.71s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (89.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node stop m02 --alsologtostderr -v 5
E1027 19:32:16.241651   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.248118   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.259573   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.281014   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.322542   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.404137   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.565549   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:16.887311   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:17.529465   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:18.811146   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:21.373072   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:26.495450   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:36.737784   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:32:57.219621   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 node stop m02 --alsologtostderr -v 5: (1m29.220645134s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5: exit status 7 (552.237904ms)

                                                
                                                
-- stdout --
	ha-471101
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-471101-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-471101-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-471101-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:33:19.357298   76705 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:33:19.357555   76705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:33:19.357565   76705 out.go:374] Setting ErrFile to fd 2...
	I1027 19:33:19.357569   76705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:33:19.357771   76705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:33:19.357978   76705 out.go:368] Setting JSON to false
	I1027 19:33:19.358013   76705 mustload.go:65] Loading cluster: ha-471101
	I1027 19:33:19.358151   76705 notify.go:220] Checking for updates...
	I1027 19:33:19.358384   76705 config.go:182] Loaded profile config "ha-471101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:33:19.358402   76705 status.go:174] checking status of ha-471101 ...
	I1027 19:33:19.360852   76705 status.go:371] ha-471101 host status = "Running" (err=<nil>)
	I1027 19:33:19.360880   76705 host.go:66] Checking if "ha-471101" exists ...
	I1027 19:33:19.363682   76705 main.go:141] libmachine: domain ha-471101 has defined MAC address 52:54:00:2e:3c:ac in network mk-ha-471101
	I1027 19:33:19.364227   76705 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:3c:ac", ip: ""} in network mk-ha-471101: {Iface:virbr1 ExpiryTime:2025-10-27 20:27:43 +0000 UTC Type:0 Mac:52:54:00:2e:3c:ac Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-471101 Clientid:01:52:54:00:2e:3c:ac}
	I1027 19:33:19.364263   76705 main.go:141] libmachine: domain ha-471101 has defined IP address 192.168.39.134 and MAC address 52:54:00:2e:3c:ac in network mk-ha-471101
	I1027 19:33:19.364404   76705 host.go:66] Checking if "ha-471101" exists ...
	I1027 19:33:19.364624   76705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:33:19.367717   76705 main.go:141] libmachine: domain ha-471101 has defined MAC address 52:54:00:2e:3c:ac in network mk-ha-471101
	I1027 19:33:19.368238   76705 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:3c:ac", ip: ""} in network mk-ha-471101: {Iface:virbr1 ExpiryTime:2025-10-27 20:27:43 +0000 UTC Type:0 Mac:52:54:00:2e:3c:ac Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-471101 Clientid:01:52:54:00:2e:3c:ac}
	I1027 19:33:19.368311   76705 main.go:141] libmachine: domain ha-471101 has defined IP address 192.168.39.134 and MAC address 52:54:00:2e:3c:ac in network mk-ha-471101
	I1027 19:33:19.368485   76705 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/ha-471101/id_rsa Username:docker}
	I1027 19:33:19.465637   76705 ssh_runner.go:195] Run: systemctl --version
	I1027 19:33:19.473341   76705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:33:19.494225   76705 kubeconfig.go:125] found "ha-471101" server: "https://192.168.39.254:8443"
	I1027 19:33:19.494269   76705 api_server.go:166] Checking apiserver status ...
	I1027 19:33:19.494316   76705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:33:19.519070   76705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	W1027 19:33:19.533737   76705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:33:19.533813   76705 ssh_runner.go:195] Run: ls
	I1027 19:33:19.539305   76705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1027 19:33:19.544940   76705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1027 19:33:19.544970   76705 status.go:463] ha-471101 apiserver status = Running (err=<nil>)
	I1027 19:33:19.544980   76705 status.go:176] ha-471101 status: &{Name:ha-471101 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:33:19.544998   76705 status.go:174] checking status of ha-471101-m02 ...
	I1027 19:33:19.546715   76705 status.go:371] ha-471101-m02 host status = "Stopped" (err=<nil>)
	I1027 19:33:19.546732   76705 status.go:384] host is not running, skipping remaining checks
	I1027 19:33:19.546737   76705 status.go:176] ha-471101-m02 status: &{Name:ha-471101-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:33:19.546752   76705 status.go:174] checking status of ha-471101-m03 ...
	I1027 19:33:19.548128   76705 status.go:371] ha-471101-m03 host status = "Running" (err=<nil>)
	I1027 19:33:19.548146   76705 host.go:66] Checking if "ha-471101-m03" exists ...
	I1027 19:33:19.550938   76705 main.go:141] libmachine: domain ha-471101-m03 has defined MAC address 52:54:00:89:4f:af in network mk-ha-471101
	I1027 19:33:19.551415   76705 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:4f:af", ip: ""} in network mk-ha-471101: {Iface:virbr1 ExpiryTime:2025-10-27 20:29:44 +0000 UTC Type:0 Mac:52:54:00:89:4f:af Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-471101-m03 Clientid:01:52:54:00:89:4f:af}
	I1027 19:33:19.551458   76705 main.go:141] libmachine: domain ha-471101-m03 has defined IP address 192.168.39.189 and MAC address 52:54:00:89:4f:af in network mk-ha-471101
	I1027 19:33:19.551605   76705 host.go:66] Checking if "ha-471101-m03" exists ...
	I1027 19:33:19.551906   76705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:33:19.554186   76705 main.go:141] libmachine: domain ha-471101-m03 has defined MAC address 52:54:00:89:4f:af in network mk-ha-471101
	I1027 19:33:19.554648   76705 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:89:4f:af", ip: ""} in network mk-ha-471101: {Iface:virbr1 ExpiryTime:2025-10-27 20:29:44 +0000 UTC Type:0 Mac:52:54:00:89:4f:af Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-471101-m03 Clientid:01:52:54:00:89:4f:af}
	I1027 19:33:19.554677   76705 main.go:141] libmachine: domain ha-471101-m03 has defined IP address 192.168.39.189 and MAC address 52:54:00:89:4f:af in network mk-ha-471101
	I1027 19:33:19.554914   76705 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/ha-471101-m03/id_rsa Username:docker}
	I1027 19:33:19.645623   76705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:33:19.671981   76705 kubeconfig.go:125] found "ha-471101" server: "https://192.168.39.254:8443"
	I1027 19:33:19.672011   76705 api_server.go:166] Checking apiserver status ...
	I1027 19:33:19.672061   76705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:33:19.694272   76705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1750/cgroup
	W1027 19:33:19.711189   76705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1750/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:33:19.711265   76705 ssh_runner.go:195] Run: ls
	I1027 19:33:19.717580   76705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1027 19:33:19.722799   76705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1027 19:33:19.722839   76705 status.go:463] ha-471101-m03 apiserver status = Running (err=<nil>)
	I1027 19:33:19.722853   76705 status.go:176] ha-471101-m03 status: &{Name:ha-471101-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:33:19.722872   76705 status.go:174] checking status of ha-471101-m04 ...
	I1027 19:33:19.724592   76705 status.go:371] ha-471101-m04 host status = "Running" (err=<nil>)
	I1027 19:33:19.724612   76705 host.go:66] Checking if "ha-471101-m04" exists ...
	I1027 19:33:19.727643   76705 main.go:141] libmachine: domain ha-471101-m04 has defined MAC address 52:54:00:e8:bc:f8 in network mk-ha-471101
	I1027 19:33:19.728124   76705 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:bc:f8", ip: ""} in network mk-ha-471101: {Iface:virbr1 ExpiryTime:2025-10-27 20:31:09 +0000 UTC Type:0 Mac:52:54:00:e8:bc:f8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-471101-m04 Clientid:01:52:54:00:e8:bc:f8}
	I1027 19:33:19.728150   76705 main.go:141] libmachine: domain ha-471101-m04 has defined IP address 192.168.39.116 and MAC address 52:54:00:e8:bc:f8 in network mk-ha-471101
	I1027 19:33:19.728322   76705 host.go:66] Checking if "ha-471101-m04" exists ...
	I1027 19:33:19.728527   76705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:33:19.730736   76705 main.go:141] libmachine: domain ha-471101-m04 has defined MAC address 52:54:00:e8:bc:f8 in network mk-ha-471101
	I1027 19:33:19.731338   76705 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:bc:f8", ip: ""} in network mk-ha-471101: {Iface:virbr1 ExpiryTime:2025-10-27 20:31:09 +0000 UTC Type:0 Mac:52:54:00:e8:bc:f8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-471101-m04 Clientid:01:52:54:00:e8:bc:f8}
	I1027 19:33:19.731386   76705 main.go:141] libmachine: domain ha-471101-m04 has defined IP address 192.168.39.116 and MAC address 52:54:00:e8:bc:f8 in network mk-ha-471101
	I1027 19:33:19.731559   76705 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/ha-471101-m04/id_rsa Username:docker}
	I1027 19:33:19.823841   76705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:33:19.844254   76705 status.go:176] ha-471101-m04 status: &{Name:ha-471101-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (89.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (51.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node start m02 --alsologtostderr -v 5
E1027 19:33:38.182810   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:33:46.120883   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 node start m02 --alsologtostderr -v 5: (50.433763778s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (51.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (393.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 stop --alsologtostderr -v 5
E1027 19:35:00.105213   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:37:16.241626   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:37:43.952494   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 stop --alsologtostderr -v 5: (4m20.770690707s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 start --wait true --alsologtostderr -v 5
E1027 19:38:46.121992   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 start --wait true --alsologtostderr -v 5: (2m12.436765557s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (393.36s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 node delete m03 --alsologtostderr -v 5: (17.414382926s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.12s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (245.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 stop --alsologtostderr -v 5
E1027 19:41:49.199744   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:42:16.241564   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:43:46.121231   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 stop --alsologtostderr -v 5: (4m5.142724978s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5: exit status 7 (65.487605ms)

                                                
                                                
-- stdout --
	ha-471101
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-471101-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-471101-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:45:09.705810   80053 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:45:09.706079   80053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:45:09.706089   80053 out.go:374] Setting ErrFile to fd 2...
	I1027 19:45:09.706093   80053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:45:09.706318   80053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:45:09.706511   80053 out.go:368] Setting JSON to false
	I1027 19:45:09.706550   80053 mustload.go:65] Loading cluster: ha-471101
	I1027 19:45:09.706674   80053 notify.go:220] Checking for updates...
	I1027 19:45:09.707134   80053 config.go:182] Loaded profile config "ha-471101": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:45:09.707156   80053 status.go:174] checking status of ha-471101 ...
	I1027 19:45:09.709195   80053 status.go:371] ha-471101 host status = "Stopped" (err=<nil>)
	I1027 19:45:09.709211   80053 status.go:384] host is not running, skipping remaining checks
	I1027 19:45:09.709216   80053 status.go:176] ha-471101 status: &{Name:ha-471101 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:45:09.709241   80053 status.go:174] checking status of ha-471101-m02 ...
	I1027 19:45:09.710312   80053 status.go:371] ha-471101-m02 host status = "Stopped" (err=<nil>)
	I1027 19:45:09.710326   80053 status.go:384] host is not running, skipping remaining checks
	I1027 19:45:09.710330   80053 status.go:176] ha-471101-m02 status: &{Name:ha-471101-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:45:09.710341   80053 status.go:174] checking status of ha-471101-m04 ...
	I1027 19:45:09.711320   80053 status.go:371] ha-471101-m04 host status = "Stopped" (err=<nil>)
	I1027 19:45:09.711343   80053 status.go:384] host is not running, skipping remaining checks
	I1027 19:45:09.711348   80053 status.go:176] ha-471101-m04 status: &{Name:ha-471101-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (245.21s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (99.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m38.357159421s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.02s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 node add --control-plane --alsologtostderr -v 5
E1027 19:47:16.242446   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-471101 node add --control-plane --alsologtostderr -v 5: (1m19.740318443s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-471101 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
TestJSONOutput/start/Command (82.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-613569 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1027 19:48:39.316979   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:48:46.120727   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-613569 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.685023519s)
--- PASS: TestJSONOutput/start/Command (82.69s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-613569 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-613569 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.32s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-613569 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-613569 --output=json --user=testUser: (8.321338338s)
--- PASS: TestJSONOutput/stop/Command (8.32s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-624389 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-624389 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (85.12698ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8b475d46-9095-43a7-bd8b-5a622ddde994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-624389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b85c4c53-e9ff-410a-a42b-9bfab7a60fb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"056abd1f-51bf-472e-86be-38470509accb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2802829b-308d-46bd-92df-6bf334f12ff9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig"}}
	{"specversion":"1.0","id":"91f1a845-36b9-4a8b-ac5a-e25b0ce1def6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube"}}
	{"specversion":"1.0","id":"72ac578b-2772-43af-afee-cee1b9461ff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8139e721-1c75-43ed-b4dd-45647ab4cc87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"704ef121-8ba9-4057-9390-1291fa115009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-624389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-624389
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (87.93s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-484552 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-484552 --driver=kvm2  --container-runtime=crio: (43.021619587s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-487133 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-487133 --driver=kvm2  --container-runtime=crio: (42.15766574s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-484552
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-487133
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-487133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-487133
helpers_test.go:175: Cleaning up "first-484552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-484552
--- PASS: TestMinikubeProfile (87.93s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-631952 --memory=3072 --mount-string /tmp/TestMountStartserial2481026656/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-631952 --memory=3072 --mount-string /tmp/TestMountStartserial2481026656/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.331898474s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.33s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-631952 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-631952 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (21.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-750219 --memory=3072 --mount-string /tmp/TestMountStartserial2481026656/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-750219 --memory=3072 --mount-string /tmp/TestMountStartserial2481026656/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.830902852s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-750219 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-750219 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-631952 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-750219 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-750219 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-750219
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-750219: (1.315158811s)
--- PASS: TestMountStart/serial/Stop (1.32s)

                                                
                                    
TestMountStart/serial/RestartStopped (17.85s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-750219
E1027 19:52:16.242016   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-750219: (16.846402758s)
--- PASS: TestMountStart/serial/RestartStopped (17.85s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-750219 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-750219 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (102.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-449598 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1027 19:53:46.121522   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-449598 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.96048914s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.31s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-449598 -- rollout status deployment/busybox: (3.702736927s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-l54xp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-mmzmd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-l54xp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-mmzmd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-l54xp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-mmzmd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.32s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-l54xp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-l54xp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-mmzmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-449598 -- exec busybox-7b57f96db7-mmzmd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                    
TestMultiNode/serial/AddNode (43.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-449598 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-449598 -v=5 --alsologtostderr: (43.270030223s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-449598 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp testdata/cp-test.txt multinode-449598:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2257688733/001/cp-test_multinode-449598.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598:/home/docker/cp-test.txt multinode-449598-m02:/home/docker/cp-test_multinode-449598_multinode-449598-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m02 "sudo cat /home/docker/cp-test_multinode-449598_multinode-449598-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598:/home/docker/cp-test.txt multinode-449598-m03:/home/docker/cp-test_multinode-449598_multinode-449598-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m03 "sudo cat /home/docker/cp-test_multinode-449598_multinode-449598-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp testdata/cp-test.txt multinode-449598-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2257688733/001/cp-test_multinode-449598-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598-m02:/home/docker/cp-test.txt multinode-449598:/home/docker/cp-test_multinode-449598-m02_multinode-449598.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598 "sudo cat /home/docker/cp-test_multinode-449598-m02_multinode-449598.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598-m02:/home/docker/cp-test.txt multinode-449598-m03:/home/docker/cp-test_multinode-449598-m02_multinode-449598-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m03 "sudo cat /home/docker/cp-test_multinode-449598-m02_multinode-449598-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp testdata/cp-test.txt multinode-449598-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2257688733/001/cp-test_multinode-449598-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598-m03:/home/docker/cp-test.txt multinode-449598:/home/docker/cp-test_multinode-449598-m03_multinode-449598.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598 "sudo cat /home/docker/cp-test_multinode-449598-m03_multinode-449598.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 cp multinode-449598-m03:/home/docker/cp-test.txt multinode-449598-m02:/home/docker/cp-test_multinode-449598-m03_multinode-449598-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 ssh -n multinode-449598-m02 "sudo cat /home/docker/cp-test_multinode-449598-m03_multinode-449598-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.09s)

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-449598 node stop m03: (1.680467638s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-449598 status: exit status 7 (345.802857ms)

                                                
                                                
-- stdout --
	multinode-449598
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-449598-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-449598-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr: exit status 7 (349.47074ms)

                                                
                                                
-- stdout --
	multinode-449598
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-449598-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-449598-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:55:01.039087   85672 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:55:01.039322   85672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:55:01.039331   85672 out.go:374] Setting ErrFile to fd 2...
	I1027 19:55:01.039335   85672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:55:01.039525   85672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 19:55:01.039689   85672 out.go:368] Setting JSON to false
	I1027 19:55:01.039723   85672 mustload.go:65] Loading cluster: multinode-449598
	I1027 19:55:01.039772   85672 notify.go:220] Checking for updates...
	I1027 19:55:01.040184   85672 config.go:182] Loaded profile config "multinode-449598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:55:01.040202   85672 status.go:174] checking status of multinode-449598 ...
	I1027 19:55:01.042431   85672 status.go:371] multinode-449598 host status = "Running" (err=<nil>)
	I1027 19:55:01.042454   85672 host.go:66] Checking if "multinode-449598" exists ...
	I1027 19:55:01.045030   85672 main.go:141] libmachine: domain multinode-449598 has defined MAC address 52:54:00:87:8d:93 in network mk-multinode-449598
	I1027 19:55:01.045475   85672 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:8d:93", ip: ""} in network mk-multinode-449598: {Iface:virbr1 ExpiryTime:2025-10-27 20:52:36 +0000 UTC Type:0 Mac:52:54:00:87:8d:93 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-449598 Clientid:01:52:54:00:87:8d:93}
	I1027 19:55:01.045506   85672 main.go:141] libmachine: domain multinode-449598 has defined IP address 192.168.39.145 and MAC address 52:54:00:87:8d:93 in network mk-multinode-449598
	I1027 19:55:01.045695   85672 host.go:66] Checking if "multinode-449598" exists ...
	I1027 19:55:01.045904   85672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:55:01.048079   85672 main.go:141] libmachine: domain multinode-449598 has defined MAC address 52:54:00:87:8d:93 in network mk-multinode-449598
	I1027 19:55:01.048528   85672 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:87:8d:93", ip: ""} in network mk-multinode-449598: {Iface:virbr1 ExpiryTime:2025-10-27 20:52:36 +0000 UTC Type:0 Mac:52:54:00:87:8d:93 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:multinode-449598 Clientid:01:52:54:00:87:8d:93}
	I1027 19:55:01.048553   85672 main.go:141] libmachine: domain multinode-449598 has defined IP address 192.168.39.145 and MAC address 52:54:00:87:8d:93 in network mk-multinode-449598
	I1027 19:55:01.048724   85672 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/multinode-449598/id_rsa Username:docker}
	I1027 19:55:01.138103   85672 ssh_runner.go:195] Run: systemctl --version
	I1027 19:55:01.145717   85672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:55:01.165167   85672 kubeconfig.go:125] found "multinode-449598" server: "https://192.168.39.145:8443"
	I1027 19:55:01.165203   85672 api_server.go:166] Checking apiserver status ...
	I1027 19:55:01.165270   85672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:55:01.187802   85672 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W1027 19:55:01.200790   85672 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:55:01.200877   85672 ssh_runner.go:195] Run: ls
	I1027 19:55:01.206675   85672 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8443/healthz ...
	I1027 19:55:01.212421   85672 api_server.go:279] https://192.168.39.145:8443/healthz returned 200:
	ok
	I1027 19:55:01.212448   85672 status.go:463] multinode-449598 apiserver status = Running (err=<nil>)
	I1027 19:55:01.212459   85672 status.go:176] multinode-449598 status: &{Name:multinode-449598 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:55:01.212475   85672 status.go:174] checking status of multinode-449598-m02 ...
	I1027 19:55:01.214420   85672 status.go:371] multinode-449598-m02 host status = "Running" (err=<nil>)
	I1027 19:55:01.214458   85672 host.go:66] Checking if "multinode-449598-m02" exists ...
	I1027 19:55:01.217729   85672 main.go:141] libmachine: domain multinode-449598-m02 has defined MAC address 52:54:00:68:4e:81 in network mk-multinode-449598
	I1027 19:55:01.218333   85672 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:68:4e:81", ip: ""} in network mk-multinode-449598: {Iface:virbr1 ExpiryTime:2025-10-27 20:53:33 +0000 UTC Type:0 Mac:52:54:00:68:4e:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-449598-m02 Clientid:01:52:54:00:68:4e:81}
	I1027 19:55:01.218370   85672 main.go:141] libmachine: domain multinode-449598-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:68:4e:81 in network mk-multinode-449598
	I1027 19:55:01.218542   85672 host.go:66] Checking if "multinode-449598-m02" exists ...
	I1027 19:55:01.218769   85672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:55:01.221302   85672 main.go:141] libmachine: domain multinode-449598-m02 has defined MAC address 52:54:00:68:4e:81 in network mk-multinode-449598
	I1027 19:55:01.221824   85672 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:68:4e:81", ip: ""} in network mk-multinode-449598: {Iface:virbr1 ExpiryTime:2025-10-27 20:53:33 +0000 UTC Type:0 Mac:52:54:00:68:4e:81 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-449598-m02 Clientid:01:52:54:00:68:4e:81}
	I1027 19:55:01.221852   85672 main.go:141] libmachine: domain multinode-449598-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:68:4e:81 in network mk-multinode-449598
	I1027 19:55:01.222029   85672 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21801-58821/.minikube/machines/multinode-449598-m02/id_rsa Username:docker}
	I1027 19:55:01.307639   85672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:55:01.326339   85672 status.go:176] multinode-449598-m02 status: &{Name:multinode-449598-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:55:01.326422   85672 status.go:174] checking status of multinode-449598-m03 ...
	I1027 19:55:01.328163   85672 status.go:371] multinode-449598-m03 host status = "Stopped" (err=<nil>)
	I1027 19:55:01.328186   85672 status.go:384] host is not running, skipping remaining checks
	I1027 19:55:01.328194   85672 status.go:176] multinode-449598-m03 status: &{Name:multinode-449598-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
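
The --alsologtostderr trace above shows how status inspects each node: an SSH check that kubelet is active, then, on the control plane, a pgrep for kube-apiserver and an HTTPS probe of /healthz, which returned 200 here. A minimal sketch of that final probe, with the endpoint taken from the log; the insecure TLS client is an assumption of this sketch, not how minikube's status code authenticates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only because this sketch loads no CA bundle;
		// a real check would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.145:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("apiserver: %d %s\n", resp.StatusCode, body) // expect "200 ok", as logged above
}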

                                                
                                    
TestMultiNode/serial/StartAfterStop (44.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-449598 node start m03 -v=5 --alsologtostderr: (43.672837245s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (44.20s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (302.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-449598
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-449598
E1027 19:57:16.241940   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:58:29.203705   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-449598: (2m56.354444685s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-449598 --wait=true -v=5 --alsologtostderr
E1027 19:58:46.123180   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-449598 --wait=true -v=5 --alsologtostderr: (2m6.444384237s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-449598
--- PASS: TestMultiNode/serial/RestartKeepsNodes (302.93s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-449598 node delete m03: (2.268385926s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.75s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (148.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 stop
E1027 20:02:16.241571   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-449598 stop: (2m28.76147756s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-449598 status: exit status 7 (66.152094ms)

                                                
                                                
-- stdout --
	multinode-449598
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-449598-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr: exit status 7 (61.956492ms)

                                                
                                                
-- stdout --
	multinode-449598
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-449598-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:03:20.091430   88041 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:03:20.091640   88041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:20.091647   88041 out.go:374] Setting ErrFile to fd 2...
	I1027 20:03:20.091652   88041 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:20.091854   88041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:03:20.092018   88041 out.go:368] Setting JSON to false
	I1027 20:03:20.092068   88041 mustload.go:65] Loading cluster: multinode-449598
	I1027 20:03:20.092184   88041 notify.go:220] Checking for updates...
	I1027 20:03:20.092489   88041 config.go:182] Loaded profile config "multinode-449598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:20.092506   88041 status.go:174] checking status of multinode-449598 ...
	I1027 20:03:20.094552   88041 status.go:371] multinode-449598 host status = "Stopped" (err=<nil>)
	I1027 20:03:20.094575   88041 status.go:384] host is not running, skipping remaining checks
	I1027 20:03:20.094582   88041 status.go:176] multinode-449598 status: &{Name:multinode-449598 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 20:03:20.094616   88041 status.go:174] checking status of multinode-449598-m02 ...
	I1027 20:03:20.095927   88041 status.go:371] multinode-449598-m02 host status = "Stopped" (err=<nil>)
	I1027 20:03:20.095942   88041 status.go:384] host is not running, skipping remaining checks
	I1027 20:03:20.095946   88041 status.go:176] multinode-449598-m02 status: &{Name:multinode-449598-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (148.89s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-449598 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1027 20:03:46.121626   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-449598 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m27.454889185s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-449598 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.92s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-449598
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-449598-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-449598-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (87.001473ms)

                                                
                                                
-- stdout --
	* [multinode-449598-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-449598-m02' is duplicated with machine name 'multinode-449598-m02' in profile 'multinode-449598'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-449598-m03 --driver=kvm2  --container-runtime=crio
E1027 20:05:19.321318   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-449598-m03 --driver=kvm2  --container-runtime=crio: (43.809947143s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-449598
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-449598: exit status 80 (213.98261ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-449598 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-449598-m03 already exists in multinode-449598-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-449598-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.02s)

                                                
                                    
TestScheduledStopUnix (112.63s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-391534 --memory=3072 --driver=kvm2  --container-runtime=crio
E1027 20:08:46.127617   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-391534 --memory=3072 --driver=kvm2  --container-runtime=crio: (40.953524313s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-391534 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-391534 -n scheduled-stop-391534
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-391534 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1027 20:08:57.923136   62705 retry.go:31] will retry after 97.215µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.924319   62705 retry.go:31] will retry after 160.849µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.925479   62705 retry.go:31] will retry after 263.46µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.926615   62705 retry.go:31] will retry after 478.887µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.927752   62705 retry.go:31] will retry after 696.858µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.928881   62705 retry.go:31] will retry after 532.786µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.930021   62705 retry.go:31] will retry after 880.519µs: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.931250   62705 retry.go:31] will retry after 1.431987ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.933494   62705 retry.go:31] will retry after 2.483629ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.936685   62705 retry.go:31] will retry after 2.303337ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.939909   62705 retry.go:31] will retry after 6.10321ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.947114   62705 retry.go:31] will retry after 9.134467ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.957390   62705 retry.go:31] will retry after 12.817224ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.970700   62705 retry.go:31] will retry after 25.140488ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:57.996994   62705 retry.go:31] will retry after 18.633472ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
I1027 20:08:58.016324   62705 retry.go:31] will retry after 29.47201ms: open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/scheduled-stop-391534/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-391534 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-391534 -n scheduled-stop-391534
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-391534
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-391534 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-391534
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-391534: exit status 7 (62.483594ms)

                                                
                                                
-- stdout --
	scheduled-stop-391534
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-391534 -n scheduled-stop-391534
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-391534 -n scheduled-stop-391534: exit status 7 (60.983927ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-391534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-391534
--- PASS: TestScheduledStopUnix (112.63s)
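
The retry.go lines above wait for the scheduled-stop pid file to appear, retrying with short, growing, jittered delays. A minimal sketch of that wait-with-backoff pattern; the waitForFile helper below is hypothetical and not minikube's retry package:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls for path until maxWait elapses, doubling a jittered delay
// between attempts, loosely mirroring the intervals in the trace above.
func waitForFile(path string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 100 * time.Microsecond
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // file exists, stop retrying
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/tmp/example-pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}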

                                                
                                    
TestRunningBinaryUpgrade (159.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1012237925 start -p running-upgrade-511686 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1012237925 start -p running-upgrade-511686 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m45.802844481s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-511686 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-511686 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.588523248s)
helpers_test.go:175: Cleaning up "running-upgrade-511686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-511686
--- PASS: TestRunningBinaryUpgrade (159.88s)

                                                
                                    
TestKubernetesUpgrade (165.71s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.524465943s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-176362
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-176362: (1.910515339s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-176362 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-176362 status --format={{.Host}}: exit status 7 (62.810928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1027 20:15:09.205556   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.044599464s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-176362 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.104697ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-176362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-176362
	    minikube start -p kubernetes-upgrade-176362 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1763622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-176362 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176362 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.053527627s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-176362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-176362
--- PASS: TestKubernetesUpgrade (165.71s)
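
The K8S_DOWNGRADE_UNSUPPORTED exit above comes from comparing the requested version against the version the cluster already runs and refusing when the request is older. A minimal sketch of such a comparison, assuming plain v<major>.<minor>.<patch> strings; this is illustrative only, not minikube's version handling:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.34.1" into its numeric components.
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// isDowngrade reports whether requested is older than current.
func isDowngrade(current, requested string) bool {
	c, r := parse(current), parse(requested)
	for i := 0; i < 3; i++ {
		if r[i] != c[i] {
			return r[i] < c[i]
		}
	}
	return false
}

func main() {
	fmt.Println(isDowngrade("v1.34.1", "v1.28.0")) // true: refused, as in the log
	fmt.Println(isDowngrade("v1.28.0", "v1.34.1")) // false: upgrades are allowed
}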

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-421237 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-421237 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (92.399135ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-421237] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
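
The MK_USAGE failure above is a pure flag-validation path: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of that kind of guard using the standard flag package; the flag names match the CLI, but the validation itself is illustrative and not minikube's own flag handling:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage errors exit with status 14 in this report
	}
}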

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (85.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-421237 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-421237 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.974618766s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-421237 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (32.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-421237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-421237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.588892598s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-421237 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-421237 status -o json: exit status 2 (255.244723ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-421237","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-421237
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.76s)

                                                
                                    
TestNoKubernetes/serial/Start (47.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-421237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1027 20:12:16.241567   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-421237 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.235974767s)
--- PASS: TestNoKubernetes/serial/Start (47.24s)

                                                
                                    
TestPause/serial/Start (82.21s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-145997 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-145997 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m22.210026663s)
--- PASS: TestPause/serial/Start (82.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-421237 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-421237 "sudo systemctl is-active --quiet service kubelet": exit status 1 (169.317567ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.81s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-421237
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-421237: (1.384546111s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (60.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-421237 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-421237 --driver=kvm2  --container-runtime=crio: (1m0.010253235s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (60.01s)

                                                
                                    
TestNetworkPlugins/group/false (4.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-764820 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-764820 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.312631ms)

                                                
                                                
-- stdout --
	* [false-764820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:13:41.392277   94497 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:13:41.392573   94497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:13:41.392589   94497 out.go:374] Setting ErrFile to fd 2...
	I1027 20:13:41.392597   94497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:13:41.392881   94497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-58821/.minikube/bin
	I1027 20:13:41.393394   94497 out.go:368] Setting JSON to false
	I1027 20:13:41.394328   94497 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":10571,"bootTime":1761585450,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 20:13:41.394426   94497 start.go:141] virtualization: kvm guest
	I1027 20:13:41.396481   94497 out.go:179] * [false-764820] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 20:13:41.397731   94497 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:13:41.397724   94497 notify.go:220] Checking for updates...
	I1027 20:13:41.398790   94497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:13:41.399954   94497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-58821/kubeconfig
	I1027 20:13:41.401135   94497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-58821/.minikube
	I1027 20:13:41.402118   94497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 20:13:41.403149   94497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:13:41.404694   94497 config.go:182] Loaded profile config "NoKubernetes-421237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 20:13:41.404782   94497 config.go:182] Loaded profile config "cert-expiration-888375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:13:41.404902   94497 config.go:182] Loaded profile config "pause-145997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:13:41.405001   94497 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:13:41.439722   94497 out.go:179] * Using the kvm2 driver based on user configuration
	I1027 20:13:41.440795   94497 start.go:305] selected driver: kvm2
	I1027 20:13:41.440811   94497 start.go:925] validating driver "kvm2" against <nil>
	I1027 20:13:41.440822   94497 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:13:41.442835   94497 out.go:203] 
	W1027 20:13:41.444094   94497 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1027 20:13:41.445250   94497 out.go:203] 

                                                
                                                
** /stderr **
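
The exit 14 above is again a usage check: with --cni=false the crio runtime cannot be configured, so start refuses before creating a VM. A minimal sketch of that gate, with the rule taken from the error text; the check below is illustrative and not minikube's code:

package main

import (
	"errors"
	"fmt"
)

// validateCNI rejects the combination that failed above: crio with CNI disabled.
func validateCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return errors.New(`the "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false"))  // rejected, as in the log
	fmt.Println(validateCNI("crio", "bridge")) // accepted
}
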
net_test.go:88: 
----------------------- debugLogs start: false-764820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-764820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 20:12:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.87:8443
  name: cert-expiration-888375
contexts:
- context:
    cluster: cert-expiration-888375
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 20:12:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-888375
  name: cert-expiration-888375
current-context: ""
kind: Config
users:
- name: cert-expiration-888375
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/cert-expiration-888375/client.crt
    client-key: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/cert-expiration-888375/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-764820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764820"

                                                
                                                
----------------------- debugLogs end: false-764820 [took: 4.650409897s] --------------------------------
helpers_test.go:175: Cleaning up "false-764820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-764820
E1027 20:13:46.120880   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false (4.97s)

                                                
                                    
TestISOImage/Setup (27.98s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p guest-291039 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p guest-291039 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.981301427s)
--- PASS: TestISOImage/Setup (27.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-421237 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-421237 "sudo systemctl is-active --quiet service kubelet": exit status 1 (179.855932ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestISOImage/Binaries/crictl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.16s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
TestISOImage/Binaries/git (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.17s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.16s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:74: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (127.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1870287214 start -p stopped-upgrade-246578 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1870287214 start -p stopped-upgrade-246578 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m11.894732034s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1870287214 -p stopped-upgrade-246578 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1870287214 -p stopped-upgrade-246578 stop: (1.792195435s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-246578 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-246578 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.434036509s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (127.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (100.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-185510 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-185510 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m40.504024259s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (100.50s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-246578
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-246578: (1.151476451s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (106.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-080015 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-080015 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m46.345278724s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (100.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-078387 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-078387 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m40.056913985s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (121.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-463502 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-463502 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (2m1.304951872s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (121.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-185510 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [85e0598b-2964-4af7-ae08-7035ed2cc485] Pending
helpers_test.go:352: "busybox" [85e0598b-2964-4af7-ae08-7035ed2cc485] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [85e0598b-2964-4af7-ae08-7035ed2cc485] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003632859s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-185510 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-185510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-185510 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.234571603s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-185510 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (85.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-185510 --alsologtostderr -v=3
E1027 20:17:16.242098   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-185510 --alsologtostderr -v=3: (1m25.737123827s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-080015 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [623c200f-1131-4dc9-9625-e8ac54c0b5ba] Pending
helpers_test.go:352: "busybox" [623c200f-1131-4dc9-9625-e8ac54c0b5ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [623c200f-1131-4dc9-9625-e8ac54c0b5ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004497047s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-080015 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-078387 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2aa0029c-fac2-4195-bf43-84430472b226] Pending
helpers_test.go:352: "busybox" [2aa0029c-fac2-4195-bf43-84430472b226] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2aa0029c-fac2-4195-bf43-84430472b226] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004835959s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-078387 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-080015 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-080015 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (88.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-080015 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-080015 --alsologtostderr -v=3: (1m28.963310716s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-078387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-078387 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (88.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-078387 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-078387 --alsologtostderr -v=3: (1m28.715192297s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185510 -n old-k8s-version-185510
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185510 -n old-k8s-version-185510: exit status 7 (60.454505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-185510 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-185510 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-185510 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.422959905s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185510 -n old-k8s-version-185510
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-463502 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a3558dab-643f-4a12-a522-c84165a00625] Pending
E1027 20:18:46.121392   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [a3558dab-643f-4a12-a522-c84165a00625] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a3558dab-643f-4a12-a522-c84165a00625] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004299654s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-463502 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-463502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-463502 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (87.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-463502 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-463502 --alsologtostderr -v=3: (1m27.223895022s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-s5qx8" [a11e913a-d391-4c5b-b41c-accfbe188b67] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-s5qx8" [a11e913a-d391-4c5b-b41c-accfbe188b67] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004233298s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-s5qx8" [a11e913a-d391-4c5b-b41c-accfbe188b67] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003954287s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-185510 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-185510 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-185510 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185510 -n old-k8s-version-185510
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185510 -n old-k8s-version-185510: exit status 2 (207.333568ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185510 -n old-k8s-version-185510
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185510 -n old-k8s-version-185510: exit status 2 (210.709603ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-185510 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185510 -n old-k8s-version-185510
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185510 -n old-k8s-version-185510
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-528878 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-528878 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (44.351469992s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080015 -n no-preload-080015
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080015 -n no-preload-080015: exit status 7 (62.353532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-080015 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (73.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-080015 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-080015 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m12.727172269s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-080015 -n no-preload-080015
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (73.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078387 -n embed-certs-078387
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078387 -n embed-certs-078387: exit status 7 (74.972985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-078387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (73.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-078387 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-078387 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.340713565s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-078387 -n embed-certs-078387
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (73.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502: exit status 7 (69.394288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-463502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-463502 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-463502 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m13.352889153s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (73.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-528878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-528878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.513752946s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-528878 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-528878 --alsologtostderr -v=3: (7.220131617s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-528878 -n newest-cni-528878
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-528878 -n newest-cni-528878: exit status 7 (66.122228ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-528878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (67.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-528878 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-528878 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.025529991s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-528878 -n newest-cni-528878
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (67.31s)
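
A minimal sketch of re-running the same second start from outside the harness, copying the logged flags verbatim; the 20-minute context timeout is an arbitrary bound added for the sketch, not something the test sets:
-- go sketch --
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Safety bound for the sketch only; the harness manages its own deadlines.
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()

	// Flags copied from the logged invocation above.
	start := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
		"-p", "newest-cni-528878",
		"--memory=3072", "--alsologtostderr",
		"--wait=apiserver,system_pods,default_sa",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.34.1")
	out, err := start.CombinedOutput()
	if err != nil {
		fmt.Printf("second start failed: %v\n%s", err, out)
		return
	}
	fmt.Println("second start completed; the test then re-checks host status")
}
-- /go sketch --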

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wwvzv" [c783b67b-5666-4ab4-be9c-bc33d26d946a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005819607s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2bj5n" [47f2677f-380f-452a-bffd-1015e9e26591] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2bj5n" [47f2677f-380f-452a-bffd-1015e9e26591] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005305922s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wwvzv" [c783b67b-5666-4ab4-be9c-bc33d26d946a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00616031s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-080015 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)
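
After the restart, this subtest waits for the dashboard pod to become Ready and then describes the dashboard-metrics-scraper deployment. A sketch of the same two steps; kubectl wait stands in for the harness's polling helper, and the 540s timeout mirrors the 9m0s bound in the log:
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Wait for the dashboard pod to report Ready (stand-in for the polling helper).
	wait := exec.Command("kubectl", "--context", "no-preload-080015",
		"wait", "--namespace=kubernetes-dashboard",
		"--for=condition=ready", "pod",
		"--selector=k8s-app=kubernetes-dashboard", "--timeout=540s")
	if out, err := wait.CombinedOutput(); err != nil {
		fmt.Printf("dashboard pod not ready: %v\n%s", err, out)
		return
	}

	// Then describe the scraper deployment exactly as the logged kubectl call does.
	describe := exec.Command("kubectl", "--context", "no-preload-080015",
		"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard")
	out, err := describe.CombinedOutput()
	if err != nil {
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
-- /go sketch --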

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-080015 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-080015 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-080015 --alsologtostderr -v=1: (1.025969789s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080015 -n no-preload-080015
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080015 -n no-preload-080015: exit status 2 (240.862119ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-080015 -n no-preload-080015
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-080015 -n no-preload-080015: exit status 2 (237.159835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-080015 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-080015 -n no-preload-080015
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-080015 -n no-preload-080015
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)
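
The Pause subtest above is a fixed sequence: pause, expect status to exit 2 with APIServer=Paused and Kubelet=Stopped, unpause, then check both fields again. A compact Go sketch of that sequence using the same commands (a stand-in for the test body, not the real one):
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one minikube invocation and returns trimmed output plus the raw error;
// non-zero exits are expected for some of the status calls below.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const p = "no-preload-080015"

	if _, err := run("pause", "-p", p, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	// While paused, `status` exits 2 and reports APIServer=Paused, Kubelet=Stopped.
	api, _ := run("status", "--format={{.APIServer}}", "-p", p, "-n", p)
	kubelet, _ := run("status", "--format={{.Kubelet}}", "-p", p, "-n", p)
	fmt.Printf("while paused: APIServer=%s Kubelet=%s\n", api, kubelet)

	if _, err := run("unpause", "-p", p, "--alsologtostderr", "-v=1"); err != nil {
		fmt.Println("unpause failed:", err)
		return
	}
	// After unpausing, both fields are checked again (exit 0 expected this time).
	api, _ = run("status", "--format={{.APIServer}}", "-p", p, "-n", p)
	kubelet, _ = run("status", "--format={{.Kubelet}}", "-p", p, "-n", p)
	fmt.Printf("after unpause: APIServer=%s Kubelet=%s\n", api, kubelet)
}
-- /go sketch --
Run against a live profile, this should reproduce the Paused/Stopped pair seen in the stdout blocks above.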

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (88.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m28.13751658s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2bj5n" [47f2677f-380f-452a-bffd-1015e9e26591] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013169532s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-078387 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-078387 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (4.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-078387 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-078387 --alsologtostderr -v=1: (1.664023495s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078387 -n embed-certs-078387
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078387 -n embed-certs-078387: exit status 2 (266.483055ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-078387 -n embed-certs-078387
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-078387 -n embed-certs-078387: exit status 2 (259.770792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-078387 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-078387 --alsologtostderr -v=1: (1.214242878s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-078387 -n embed-certs-078387
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-078387 -n embed-certs-078387
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pczcn" [a596375f-fa7c-438f-a70a-048bbbe79adf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pczcn" [a596375f-fa7c-438f-a70a-048bbbe79adf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.005235774s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (101.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m41.177684243s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pczcn" [a596375f-fa7c-438f-a70a-048bbbe79adf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005012392s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-463502 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-463502 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
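
The image check lists everything loaded in the profile and reports images that are not stock Kubernetes images, which is why kindnetd and the busybox test image show up above. A rough sketch of the idea follows; it assumes plain image list output with one reference per line (the test itself uses --format=json) and uses a hypothetical allowlist of prefixes:
-- go sketch --
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Simplified stand-in for the test's `image list --format=json` call:
	// assume the default listing prints one image reference per line.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "default-k8s-diff-port-463502", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Printf("image list failed: %v\n%s", err, out)
		return
	}

	// Hypothetical allowlist for this sketch; anything outside it is reported the way
	// the log reports kindnetd and the busybox test image.
	allowed := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		ok := false
		for _, prefix := range allowed {
			if strings.HasPrefix(img, prefix) {
				ok = true
				break
			}
		}
		if !ok {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}
-- /go sketch --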

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-463502 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-463502 --alsologtostderr -v=1: (1.082197255s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502: exit status 2 (266.152459ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502: exit status 2 (310.453571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-463502 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-463502 --alsologtostderr -v=1: (1.143675188s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-463502 -n default-k8s-diff-port-463502
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-528878 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-528878 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-528878 --alsologtostderr -v=1: (1.135173635s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-528878 -n newest-cni-528878
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-528878 -n newest-cni-528878: exit status 2 (315.881548ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-528878 -n newest-cni-528878
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-528878 -n newest-cni-528878: exit status 2 (298.227843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-528878 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-528878 -n newest-cni-528878
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-528878 -n newest-cni-528878
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (105.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1027 20:21:59.322935   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.312941   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.319462   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.331014   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.352560   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.394696   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.476314   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.637907   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:03.959636   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:04.601710   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:05.883630   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:08.445557   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:13.567814   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:16.242139   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/functional-074768/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:23.810123   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:22:44.291638   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m45.626780776s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (105.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-764820 "pgrep -a kubelet"
I1027 20:22:54.597119   62705 config.go:182] Loaded profile config "auto-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
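
KubeletFlags simply asks the guest for the running kubelet command line so the test can inspect its flags. A minimal sketch of the same query; it assumes minikube ssh accepts the remote command as a single trailing argument, as the logged invocation suggests:
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask the guest for the full kubelet command line, as the logged ssh call does.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "auto-764820", "pgrep -a kubelet")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("kubelet command line:\n%s", out)
}
-- /go sketch --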

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-764820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rqslb" [7ae61177-9490-4a65-bcd1-01d1d2b7c2f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rqslb" [7ae61177-9490-4a65-bcd1-01d1d2b7c2f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.003542206s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-764820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
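
DNS, Localhost and HairPin all reuse the netcat deployment created by NetCatPod: one in-cluster DNS lookup and two nc connection probes. A small Go sketch that replays the three logged kubectl exec commands against the auto-764820 context (illustrative, not the net_test.go helpers):
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each probe mirrors a command from the log above: in-cluster DNS, a loopback
	// connection, and a hairpin connection back through the pod's own service.
	probes := [][]string{
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, probe := range probes {
		args := append([]string{"--context", "auto-764820"}, probe...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("probe %v failed: %v\n%s", probe, err, out)
			continue
		}
		fmt.Printf("probe %v ok\n", probe)
	}
}
-- /go sketch --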

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6mwp5" [be590ad8-f5c2-42aa-a1ac-55c08a3e56e6] Running
E1027 20:23:20.365621   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/no-preload-080015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005773149s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
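
The controller-pod check only waits for the kindnet DaemonSet pod in kube-system to become Ready. A rough equivalent using kubectl wait instead of the harness's own polling helper (the 600s timeout is chosen here to echo the 10m bound in the log):
-- go sketch --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Stand-in for the harness's wait loop: block until the kindnet pod is Ready.
	cmd := exec.Command("kubectl", "--context", "kindnet-764820",
		"wait", "--namespace=kube-system",
		"--for=condition=ready", "pod",
		"--selector=app=kindnet", "--timeout=600s")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("kindnet pod never became ready: %v\n%s", err, out)
		return
	}
	fmt.Println("kindnet controller pod is ready")
}
-- /go sketch --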

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (87.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1027 20:23:25.253549   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m27.074604953s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-764820 "pgrep -a kubelet"
E1027 20:23:25.487472   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/no-preload-080015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1027 20:23:25.493360   62705 config.go:182] Loaded profile config "kindnet-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-764820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mddwk" [7f9e03a2-4f55-4452-97ba-ecbb25d271f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mddwk" [7f9e03a2-4f55-4452-97ba-ecbb25d271f2] Running
E1027 20:23:35.729092   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/no-preload-080015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004023979s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-764820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-764820 "pgrep -a kubelet"
I1027 20:23:41.237830   62705 config.go:182] Loaded profile config "custom-flannel-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-764820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zq7kr" [c60261df-30bf-4e55-9a9f-e79af4058332] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:23:45.305168   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.311618   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.323020   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.344357   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.386578   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.468452   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.629898   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:45.952011   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:46.120940   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/addons-864929/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zq7kr" [c60261df-30bf-4e55-9a9f-e79af4058332] Running
E1027 20:23:46.593530   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:47.875828   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:50.437775   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005201914s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-764820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (71.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1027 20:23:55.559218   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:23:56.210906   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/no-preload-080015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:24:05.801283   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m11.479228853s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (95.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1027 20:24:26.283378   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:24:37.172242   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/no-preload-080015/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:24:47.175190   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/old-k8s-version-185510/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-764820 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.027804205s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-764820 "pgrep -a kubelet"
I1027 20:24:51.249990   62705 config.go:182] Loaded profile config "enable-default-cni-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-764820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-962ck" [bf11be4b-21bd-4dc5-a026-816d09ea51a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-962ck" [bf11be4b-21bd-4dc5-a026-816d09ea51a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004398305s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-764820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-j8l79" [268bce50-6578-419b-aae6-1a4cf7b3c675] Running
E1027 20:25:07.245070   62705 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/default-k8s-diff-port-463502/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005000108s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-764820 "pgrep -a kubelet"
I1027 20:25:11.263265   62705 config.go:182] Loaded profile config "flannel-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-764820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tj54m" [a26d4a98-e98e-4968-a17a-6272ae0180f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tj54m" [a26d4a98-e98e-4968-a17a-6272ae0180f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003456149s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)
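KubeletFlags and NetCatPod together amount to one ssh command and one deployment rollout. A hedged sketch of re-running them by hand (the test itself watches the app=netcat pods directly rather than using rollout status):

	out/minikube-linux-amd64 ssh -p flannel-764820 "pgrep -a kubelet"
	kubectl --context flannel-764820 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context flannel-764820 rollout status deployment/netcat --timeout=15m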

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.16s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:95: (dbg) Run:  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.16s)
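All seven PersistentMounts subtests run the same df probe against a different path inside the guest-291039 VM; a compact way to repeat them in one shell loop, assuming the profile is still running:

	for p in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
	  out/minikube-linux-amd64 -p guest-291039 ssh "df -t ext4 $p | grep $p"
	done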

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-764820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-764820 "pgrep -a kubelet"
I1027 20:25:44.094399   62705 config.go:182] Loaded profile config "bridge-764820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-764820 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t9sqz" [48428892-7405-4b6d-9a65-f50bd65b169c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t9sqz" [48428892-7405-4b6d-9a65-f50bd65b169c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004088437s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-764820 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-764820 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (40/336)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
270 TestStartStop/group/disable-driver-mounts 0.18
274 TestNetworkPlugins/group/kubenet 4.22
282 TestNetworkPlugins/group/cilium 4.46
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-864929 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-842975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-842975
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-764820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-764820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 20:12:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.87:8443
  name: cert-expiration-888375
contexts:
- context:
    cluster: cert-expiration-888375
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 20:12:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-888375
  name: cert-expiration-888375
current-context: ""
kind: Config
users:
- name: cert-expiration-888375
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/cert-expiration-888375/client.crt
    client-key: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/cert-expiration-888375/client.key
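The dump above explains the repeated context errors in this debug log: the kubeconfig contains only the cert-expiration-888375 entry and its current-context is empty, so no context named kubenet-764820 exists. A quick check on the same machine (assuming the same KUBECONFIG) would be:

	kubectl config get-contexts
	kubectl config current-context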

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-764820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764820"

                                                
                                                
----------------------- debugLogs end: kubenet-764820 [took: 4.051711276s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-764820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-764820
--- SKIP: TestNetworkPlugins/group/kubenet (4.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-764820 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-764820" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-58821/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 20:12:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.87:8443
  name: cert-expiration-888375
contexts:
- context:
    cluster: cert-expiration-888375
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 20:12:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-888375
  name: cert-expiration-888375
current-context: ""
kind: Config
users:
- name: cert-expiration-888375
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/cert-expiration-888375/client.crt
    client-key: /home/jenkins/minikube-integration/21801-58821/.minikube/profiles/cert-expiration-888375/client.key
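Note on the kubeconfig above: it contains a single entry for cert-expiration-888375 and no context named cilium-764820, which is why every kubectl probe in this debug dump fails with "context was not found" / "does not exist" and every minikube probe reports the profile as missing. A minimal, illustrative Go sketch (not part of the test suite; the context name is taken from this log, the kubeconfig path and everything else are assumptions) showing how such a context check could be done with client-go before issuing probes:

	// Illustrative sketch only: load a kubeconfig and check whether a named
	// context exists before running "kubectl --context <name>" style probes.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption for illustration; the test harness manages its own kubeconfig.
		kubeconfig := os.Getenv("KUBECONFIG")
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		name := "cilium-764820" // context probed throughout this debug dump
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q not found; contexts that do exist:\n", name)
			for ctx := range cfg.Contexts {
				fmt.Println(" -", ctx)
			}
			return
		}
		fmt.Printf("context %q found\n", name)
	}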

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-764820

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-764820" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764820"

                                                
                                                
----------------------- debugLogs end: cilium-764820 [took: 4.272246928s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-764820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-764820
--- SKIP: TestNetworkPlugins/group/cilium (4.46s)
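For reference, the cleanup step logged above is a plain CLI invocation of the minikube binary. A minimal, illustrative Go sketch of running the same delete from a helper (the binary path and profile name are taken from this log; the error handling is an assumption, not the helpers_test.go implementation):

	// Illustrative sketch only: run "minikube delete -p <profile>" and surface
	// the combined output if the command fails.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		profile := "cilium-764820" // profile cleaned up in the log above
		cmd := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "delete %s failed: %v\n%s", profile, err, out)
			os.Exit(1)
		}
		fmt.Printf("profile %s deleted\n", profile)
	}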

                                                
                                    